Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - monirul

Pages: 1 2 [3] 4
31
Very Nice and Informative post.

32
Very Nice and Informative post.

33
It’s that time of year again, and the Raspberry Pi Foundation has some new hardware for you. This time, it’s an improved version of the Raspberry Pi Model A+, bringing it the speed and power of its bigger brother, the Raspberry Pi 3 Model B+.

The Raspberry Pi Model A+ is the weird middle child of the Raspberry Pi lineup, or maybe it’s the Goldilocks choice. It’s not as powerful as, and doesn’t have the USB ports or Ethernet jack of, the latest revision of the family, the Raspberry Pi 3 Model B+, and it’s not as small or as cheap as the Raspberry Pi Zero W. If you’re running a Pi as just something that takes in power and spits out data on the GPIO pins, the Model A+ might be all you need.
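For that kind of headless GPIO duty, a few lines of Python are usually all it takes. The sketch below is illustrative only: it assumes the common RPi.GPIO library and an LED wired to BCM pin 17, and simply toggles the pin once a second.

```python
# Minimal GPIO sketch for a headless Pi (assumes the RPi.GPIO library
# and an LED on BCM pin 17 -- adjust to your own wiring).
import time
import RPi.GPIO as GPIO

LED_PIN = 17  # hypothetical pin choice

GPIO.setmode(GPIO.BCM)          # use Broadcom pin numbering
GPIO.setup(LED_PIN, GPIO.OUT)   # configure the pin as an output

try:
    while True:
        GPIO.output(LED_PIN, GPIO.HIGH)  # drive the pin high
        time.sleep(1)
        GPIO.output(LED_PIN, GPIO.LOW)   # and low again
        time.sleep(1)
finally:
    GPIO.cleanup()              # release the pin on exit
```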

The full specs include:

Broadcom BCM2837B0, quad-core Cortex-A53 running at 1.4 GHz
512 MB of LPDDR2 SDRAM
2.4 GHz and 5 GHz 802.11 b/g/n/ac wireless LAN, Bluetooth 4.2/BLE
Full-size HDMI
MIPI DSI display port / CSI camera port
Stereo audio output and composite video port
In short, we’re looking at a cut-down version of the Raspberry Pi 3 Model B+ released earlier this year, without an Ethernet port and with only one USB port. The wireless chipset is hidden under a lovely embossed can, and until we get our hands on this new model and a pair of pliers, we’re assuming this is a CYW43455, the Cypress chipset found in the Pi 3 B+.

The price of the Raspberry Pi 3 Model A+ will be $25 USD, with availability soon at the usual retailers. Since there’s no such thing as a Pi Zero 3 yet, if you’re looking for a powerful Linux computer, with wireless, in a small form factor, you’re not going to do much better than this little guy. You could of course desolder a Pi 3 B+, but for now this is the smallest, most powerful single board computer with good software support.

34


NASA says the global dust storm on Mars that incapacitated its Opportunity rover has finally ended. The space agency made the determination based on tau, a measure of how much of the incoming sunlight is blocked by dust hanging in the planet’s atmosphere. According to the agency’s latest update, the measurement is now back to normal.


 
NASA revealed the storm’s existence early this past summer, stating in mid-June that it was one of the thickest dust storms scientists had ever observed on the Red Planet. The dust eventually blotted out the sunlight that the Opportunity rover depended on for recharging its batteries.

It didn’t take long before the dust storm became global, circling the entire planet and leaving the older Mars rover unable to function. On August 1, NASA reported that the dust storm was showing signs of abating, and by August 7 the tau reading had dropped to 2.5.

NASA started listening for potential communication from the Opportunity rover, soon increasing the number of times it attempted to contact the machine. Unfortunately, the space agency still has not established communication with the rover.

In its most recent update on the Mars mission, NASA revealed that Mars’ tau rating is 0.8, a level considered normal and storm-free. The space agency previously stated that dust on Opportunity’s solar panels may be inhibiting its ability to recharge, though there’s no way of knowing exactly why it hasn’t responded.
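Tau is an optical-depth figure, so the fraction of direct sunlight reaching the surface falls off roughly as exp(-tau). That exponential relation is standard atmospheric optics rather than something spelled out in NASA’s update, but it makes the quoted numbers concrete:

```python
# Rough illustration: fraction of direct sunlight transmitted through
# dust of optical depth tau, using the standard exp(-tau) relation.
# The tau values are those quoted above; the relation is a simplification
# (it ignores scattered, diffuse light).
import math

for label, tau in [("storm-free (normal)", 0.8),
                   ("storm abating (Aug 7)", 2.5)]:
    transmitted = math.exp(-tau)
    print(f"tau = {tau}: ~{transmitted:.0%} of direct sunlight gets through ({label})")
```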

Experts say Mars is undergoing a windy season and that high winds may blow dust off the rover’s solar panels, assuming that’s the problem. If that happens — and again assuming there are no other issues — Opportunity may be able to recharge its batteries and establish communication with Earth.

35
As Earth's tectonic plates dive beneath one another, they drag three times as much water into the planet's interior as previously thought.


At earthquake-prone subduction zones, where tectonic plates dive beneath one another, massive amounts of sea water are dragged into the planet's interior, a new study has revealed.


Those are the results of a new paper published today (Nov. 14) in the journal Nature. Using the natural seismic rumblings of the earthquake-prone subduction zone at the Marianas trench, where the Pacific plate is sliding beneath the Philippine plate, researchers were able to estimate how much water gets incorporated into the rocks that dive deep below the surface. [In Photos: Ocean Hidden Beneath Earth's Surface]

The find has major ramifications for understanding Earth's deep water cycle, wrote  marine geology and geophysics researcher Donna Shillington of the Lamont-Doherty Earth Observatory at Columbia University in an op-ed accompanying the new paper. Water beneath the surface of the Earth can contribute to the development of magma and can lubricate faults, making earthquakes more likely, wrote Shillington, who was not involved in the new research.

The deep water cycle
Water is stored in the crystalline structure of minerals, Shillington wrote. The liquid gets incorporated into the Earth's crust both when brand-new, piping-hot oceanic plates form and when the same plates bend and crack as they grind under their neighbors. This latter process, called subduction, is the only way water penetrates deep into the crust and mantle, but little is known about how much water moves during the process, study leader Chen Cai of Washington University in St. Louis and his colleagues wrote in their new paper.

"Before we did this study, every researcher knew that water must be carried down by the subducting slab," Cai told Live Science. "But they just didn't know how much water."

The researchers used data picked up by a network of seismic sensors positioned around the central Marianas Trench in the western Pacific Ocean. The deepest part of the trench is nearly 7 miles (11 kilometers) below sea level. The sensors detect earthquakes and the echoes of earthquakes ringing through Earth's crust like a bell. Cai and his team tracked how fast those temblors traveled: A slowdown in velocity, he said, would indicate water-filled fractures in rocks and "hydrated" minerals that lock up water within their crystals.

Missing water
The researchers observed such slowdowns deep into the crust, some 18 miles (30 km) below the surface, Cai said. Using the measured velocities, along with the known temperatures and pressures found there, the team calculated that the subduction zones pull 3 billion teragrams of water into the crust every million years (a teragram is a billion kilograms).

Seawater is heavy; a cube of this water 1 meter (3.3 feet) long on each side would weigh 1,024 kilograms (2,250 lbs.). But still, the amount pulled down by subduction zones is mind-boggling. It's also three times as much water as subduction zones were previously estimated to take in, Cai said.
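To put the figure in perspective, the numbers quoted above allow a quick back-of-the-envelope conversion (our arithmetic, not a calculation from the paper): 3 billion teragrams per million years, at 1,024 kilograms per cubic metre, works out to roughly 3 cubic kilometres of seawater dragged down every year.

```python
# Back-of-the-envelope conversion of the figures quoted in the article
# (illustrative arithmetic only, not taken from the paper itself).
TERAGRAM_KG = 1e9              # 1 teragram = a billion kilograms
mass_kg = 3e9 * TERAGRAM_KG    # 3 billion teragrams per million years
per_year_kg = mass_kg / 1e6    # spread over a million years

density = 1024.0                    # kg per cubic metre of seawater
volume_m3 = per_year_kg / density   # cubic metres per year
volume_km3 = volume_m3 / 1e9        # 1 km^3 = 1e9 m^3

print(f"{per_year_kg:.1e} kg of seawater per year")        # ~3.0e12 kg
print(f"~{volume_km3:.1f} cubic km of seawater per year")  # ~2.9 km^3
```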

And that raises some questions: The water that goes down must come back up, usually via volcanic eruptions. Yet the new estimate of how much water is going down is far larger than estimates of how much is being emitted by volcanoes, meaning scientists are missing something in their accounting, the researchers said. The oceans are not losing water, Cai said, so the amount of water dragged down into the crust and the amount spouted back out should be about equal. The fact that they aren't suggests that there's something about how water moves through the interior of Earth that scientists don't yet understand.

"Many more studies need to be focused on this aspect," Cai said.

Originally published on Live Science.

36
Pluto is most definitely a planet — and should never have been downgraded, say some scientists.

Make Pluto great again? That seems to be the aim of a new study that urges Pluto be returned to its former planetary glory.

The research, published in the scientific journal Icarus, says Pluto never should have been downgraded from a planet to a dwarf planet 12 years ago. Why? Because, the authors say, the rationale behind the decision wasn’t valid.

Let’s back up a bit, to 2006. That’s when the International Astronomical Union (IAU), the group that gets to name planetary bodies, established updated rules for what is and what isn’t a planet.


A view of Pluto seen from the surface of its largest moon Charon. (Getty Images)

The IAU defined a planet as a celestial body that orbits the sun, is round or nearly round and “clears the neighborhood” around its orbit.

It’s that last part that’s currently in dispute. The IAU said Pluto was just too small to clear the neighborhood, or knock other space rocks out of its path as it orbits the sun. And so, the astronomical union demoted Pluto to dwarf planet status.

The contention
Philip Metzger has a problem with that. He’s a University of Central Florida planetary scientist and lead author on the study.

“The IAU definition would say that the fundamental object of planetary science, the planet, is supposed to be defined on the basis of a concept that nobody uses in their research,” Metzger said in a statement on the school’s website.

Metzger and his team looked at more than two centuries’ worth of research and found just one study, from the early 19th century, that employed the orbit-clearing standard the IAU used to downgrade Pluto.

And, Metzger also points out, the standard used to classify planets changed in the 1950s after astronomer Gerard Kuiper said what really determines what is and what isn’t a planet is how a celestial body is formed.

Metzger goes a step further, saying a planet should be classified based on whether it’s big enough that its gravity allows it to take on a spherical shape, according to the school’s statement.

“And that’s not just an arbitrary definition,” he said. “It turns out this is an important milestone in the evolution of a planetary body, because apparently when it happens, it initiates active geology in the body.”

A debate reignited
Pulling Pluto from the ranks of planets has always been a controversial decision.

Back in 2014, the Harvard-Smithsonian Center for Astrophysics jumped into the debate over what counts as a planet. It had some experts discuss the definition of a planet and then let the audience vote. No surprise: the audience voted that Pluto is a planet.

Metzger said the only planet more complex than Pluto is Earth. And we’ve learned so much more about Pluto since NASA’s New Horizons spacecraft flew past it in 2015. Thanks to New Horizons, we now know that Pluto has dunes made of solid methane ice, mountain peaks covered in methane snow and, possibly, a liquid ocean beneath its icy surface.

The IAU said there’s a clear way to bring up a motion with the group — “which is to propose an IAU Resolution through the relevant Working Group(s) and Division.”

So far, however, no such resolutions have been proposed, said Lars Lindberg Christensen with the group.

“It is nevertheless good and healthy to debate these topics,” Christensen said.

Maybe, sometime soon, the debate will be settled — and we can all go back to talking about the nine-planet solar system we learned about in school.

Source: CNN WIRE

37

A senior researcher has quit his job at Google in protest over the company’s leaked plans to create a censored web search app for China, codenamed “Dragonfly.”

Jack Poulson, 32, who worked for the research and machine-intelligence department, left Google on Aug. 31 after discussing his concerns with his bosses for several weeks. He felt that resigning was his “ethical responsibility” to protest “the forfeiture of our public human-rights commitments,” he told The Intercept.


The Google logo at the Smart China Expo at Chongqing International Expo Center in Chongqing, China, on Aug. 23, 2018. (STR/AFP/Getty Images)


The Chinese communist regime runs the world’s most sophisticated system of internet censorship and requires foreign companies to censor topics it deems “sensitive,” such as democracy, human rights, and persecution of groups like Tibetans, Falun Gong practitioners, human rights activists, and others. Companies are also forced to share their data stored in China with the regime.

“Due to my conviction that dissent is fundamental to functioning democracies, I am forced to resign, in order to avoid contributing to, or profiting from, the erosion of protection for dissidents,” Poulson wrote in his resignation letter.

Past Censorship
Google ran a censored version of its search engine in China from 2006 to 2010, when the company backed out. Its stated reason for exiting was a cyber attack originating from China that targeted Google email accounts of dozens of Chinese human-rights activists.


Google co-founder Sergey Brin, who was born in Soviet Russia, said in 2010 he saw “some earmarks of totalitarianism” in China, which was “personally quite troubling” to him, The Wall Street Journal reported. The newspaper cited “people familiar with the discussions” as saying that then-Chief Executive Eric Schmidt and others advocated staying in China.

Google co-founder Sergey Brin in Seoul, South Korea, on March 12, 2016. (Jung Yeon-Je/AFP/Getty Images)
China has been listed for decades by watchdogs as one of the worst abusers of human rights. Among other atrocities, the regime has killed hundreds of thousands of prisoners of conscience to sell their organs for transplants, based on an extensive body of research produced since allegations of the crime first surfaced in 2006.

Poulson joined Google in May 2016 and worked on “international query analysis,” which aims to improve the accuracy of Google search systems.

He said he joined viewing Google’s withdrawal from China and Brin’s comments of support for individual liberties as a statement of principle. If Google is betraying such principles, he doesn’t want to “be complicit as a shareholder and citizen of the company,” he said.

While Google was applauded by human-rights advocates for its 2010 action, it might have withdrawn for economic reasons. The company struggled to make inroads in the Chinese market, where the regime supports home-grown companies with top cadre connections at the expense of competitors.


Poulson warned that if Google chooses to cave to Chinese censors again, it may embolden other regimes to push their demands, too.

“I view our intent to capitulate to censorship and surveillance demands in exchange for access to the Chinese market as a forfeiture of our values and governmental negotiating position across the globe,” Poulson wrote. “There is an all-too-real possibility that other nations will attempt to leverage our actions in China in order to demand our compliance with their security demands.”

Google Response
Google reportedly tried to keep its China plans secret from all but a few hundred of its 88,000 employees. Once the information leaked, more than 1,400 employees signed a letter demanding an investigation of “urgent moral and ethical issues” raised by the project. Still, Google has declined to confirm the Dragonfly project, despite multiple media reports confirming its existence through unidentified sources.

Sixteen members of Congress, including Democrats and Republicans, said in a letter to Google they have “serious concerns” about the project. The letter asked if Google would take steps “to ensure that individual Chinese citizens or foreigners living in China, including Americans, will not be surveilled or targeted through Google applications.” The company didn’t immediately respond.

Poulson said about four other employees also quit over Dragonfly.

“It’s incredible how little solidarity there is on this,” he said. “It is my understanding that when you have a serious ethical disagreement with an issue, your proper course of action is to resign.”

Reuters contributed to this report.

38
There are fears genetics research into autism will lead to eugenics and eradication of the condition. That must never come to pass, says Simon Baron-Cohen

Genetics plays a large role in causing autism, so knowing more about which genes influence it could allow a better understanding of the condition.

It is a rapidly unfolding area of research, but there is a problem. As director of the University of Cambridge’s Autism Research Centre, I am increasingly aware that more and more autistic people don’t want to take part in genetics studies.

It seems to be happening because of a fear that the agenda is eugenics – find the genes to identify potentially autistic babies in pregnancy, and terminate such pregnancies. These fears are understandable if we look at how this has happened in the case of Down’s syndrome.


Some people also worry that genetics research will lead to genetic engineering to “normalise” autistic people. Again, I would be horrified at this application of science, because it doesn’t respect that people with autism are neurologically different and, like any other kind of diversity (such as hair, skin or eye colour, handedness, or sexual orientation), should be accepted for who they are.

Opposed to eugenics
My colleagues and I are opposed to any form of eugenics. The worry though is that if people equate autism genetics with a eugenics agenda, valuable progress on autism genetics could be slowed down.

Today, autism is known to be strongly genetic, with heritability estimated at between 60 and 90 per cent. Autism is not 100 per cent genetic – if one identical twin has autism their co-twin doesn’t always have it. The obvious conclusion is that a genetic predisposition interacts with environmental factors.
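The logic behind such heritability figures can be pictured with the classic twin comparison. The sketch below uses Falconer’s standard textbook estimator, not anything cited in this article, and the correlations are placeholder values rather than figures from any autism study.

```python
# Illustration only: Falconer's classic estimate of broad heritability
# from twin correlations, h2 = 2 * (r_MZ - r_DZ). The inputs below are
# made-up placeholders, not data from any autism study.
def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Estimate heritability from identical (MZ) and fraternal (DZ) twin correlations."""
    return 2.0 * (r_mz - r_dz)

h2 = falconer_heritability(r_mz=0.80, r_dz=0.40)  # placeholder correlations
print(f"Estimated heritability: {h2:.0%}")        # -> 80%
```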

Since the start of this century – only 18 years ago – 100 “high confidence” genes have been associated with autism. They are called high confidence because they have been identified in multiple labs. These include single rare gene mutations that are sufficient to cause autism, but are found in less than 5 per cent of autistic people.

We now estimate that the 100 that have been found are just the tip of the iceberg, and that between 400 and 1000 genes may be involved in autism.

Furthermore, we now think that at least 50 per cent of the heritability of autism may be due not to rare mutations but to common genetic variants that we all carry, with some versions occurring in different frequencies in autistic people.

To identify common genetic variants requires many tens of thousands of people to take part in studies. For example, we worked with consumer DNA testing company 23andMe recently and discovered several common genetic variants by asking 80,000 people who had sent samples to the company to take an online test. It will take years of collaborative effort to collect this “big data” to discover all the genes relevant to autism.

A large part of that collaboration is with the autism community. If that community loses faith in this research, then progress will slow down. And it won’t just be science labs that are affected.

As scientists, our agenda is simply to understand the causes of autism. This has intrinsic value, because it contributes to human knowledge and will hopefully deepen autistic people’s understanding of their own identity.

Help where desired
Genetic knowledge could also change lives for the better. For example, one clinical use of gene discovery that we think is ethical would be early detection of autism, with a view to early intervention, if parents opt for this.

This isn’t at odds with the “neurodiversity” view, because in an ideal world early interventions would target only symptoms that cause disability or distress, not autism itself. Examples of unwanted symptoms might be language delay, epilepsy, learning difficulties or gastro-intestinal disorder.

Another advantage of early detection might be better early support for children who are vulnerable to becoming teenagers with poor mental health. If you leave an autistic child without the right support and they are expected to cope in an education system that might not fit their learning style, or if they are bullied for being different, you end up with a child who feels like they are failing. Or a child who loses self-confidence because they are abused and manipulated by others with more “street smarts”.

We at the Autism Research Centre have no desire to cure, prevent or eradicate autism. I hope the autism community will be willing to trust researchers who nail their colours to the mast in this way.

Autistic people have a special mix of strengths and challenges. The strengths include excellent attention to detail, memory for detail, pattern recognition and honesty. Our aim, as clinicians as well as scientists, is to make the world a more comfortable place for autistic (and all) people to live in.

39

After decades of damning reports, bleak images, and depressing headlines, one new report claims to have a “positive update” on the Great Barrier Reef (GBR).

The Reef & Rainforest Research Centre (RRRC), a non-profit organization, has published a report for the Queensland State Government that claims parts of the GBR are showing some “significant signs” of recovery from years of bleaching.

Don’t crack out the champagne just yet, though - the future of the world's largest coral reef (or any coral reef, for that matter) is still not looking rosy. At all.

While scientists and policymakers have been working hard to support the reefs, this recent development is primarily thanks to a milder 2017-18 summer. The kinder weather has allowed parts of the reef to regain some of their health following the catastrophic bleaching events of 2016 and 2017, but all it takes is another bad season and it’s back to square one.

“Saxon Reef, for example, suffered some form of bleaching on 47.1 percent of its live coral cover during the 2016 event. Fortunately, much of the bleached coral recovered thanks to better conditions experienced in 2018,” Sheriden Morris, RRRC Managing Director, said in a statement.

“However, this recovery is always going to be contingent on environmental conditions.”

A section of bleached coral. Richard Whitcombe/Shutterstock
“We all know that the reef may suffer further bleaching events as the climate continues to warm, but we have to do everything we possibly can to help protect our Great Barrier Reef,” he warned.

Coral have a mutually beneficial relationship with microalgae that live in their tissues. The corals provide protection and extra surface area; the photosynthetic algae provide the “food.” If the algae become stressed by disease, pollution, or rising temperatures, they leave the coral. Along with losing their vibrant coloring, the corals lose an important energy source, becoming weak and susceptible to disease. Fortunately, corals do have a significant capacity to bounce back from this damage.

Morris explained, “It is important to realize that bleaching occurs in multiple stages, ranging from the equivalent of a mild sunburn to coral mortality.”

The GBR is much more than a pretty sight. Stretching for over 2,300 kilometers (1,430 miles) down the coastline of Queensland in northeast Australia, it is the world's largest coral reef system by some margin. Coral reefs, in general, contain almost a third of the world's known marine biodiversity, from giant turtles to teeny seahorses.

For Queensland, it’s also a great source of tourism.

40
MIT researchers develop inexpensive way to perform full lifecycle analysis of design choices as buildings are being planned.


Typically, when architects or engineers design a new building, it’s only at the end of the process — if ever — that a lifecycle analysis of the building’s environmental impact is carried out. And by then, it may be too late to make significant changes. Now, a faster and easier system for doing such analyses could change all that, making the analysis an integral part of the design process from the beginning.

The new process, described in the journal Building and Environment in a paper by MIT researchers Jeremy Gregory, Franz-Josef Ulm and Randolph Kirchain, and recent graduate Joshua Hester PhD ’18, is simple enough that it could be integrated into the software already used by building designers so that it becomes a seamless addition to their design process.

Lifecycle analysis, known as LCA, is a process of examining all the materials; design elements; location and orientation; heating, cooling, and other energy systems; and expected ultimate disposal of a building, in terms of costs, environmental impacts, or both. Ulm, a professor of civil and environmental engineering and director of MIT’s Concrete Sustainability Hub (CSH), says that typically LCA is applied “only when a building is fully designed, so it is rather a post-mortem tool but not an actual design tool.” That’s what the team set out to correct.

“We wanted to address how to bridge that gap between using LCA at the end of the process and getting architects and engineers to use it as a design tool,” he says. The big question was whether it would be possible to incorporate LCA evaluations into the design process without having it impose too many restrictions on the design choices, thus making it unappealing to the building designers. Ulm wondered, “How much does the LCA restrict the flexibility of the design?”

Measuring freedom of design

To address that question systematically, the team had to come up with a process of measuring the flexibility of design choices in a quantitative way. They settled on a measure they call “entropy,” analogous to the use of that term in physics. In physics, a system with greater entropy is “hotter,” with its molecules moving around rapidly. In the team’s use of the term, higher entropy represents a greater variety of available choices at a given point, while lower entropy represents a more restricted range of choices.
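The article describes this entropy only loosely. One plausible way to picture it, offered here as an illustration rather than the exact metric defined in the paper, is Shannon entropy over the design options still viable at a given stage: many equally open options means high entropy, a single forced choice means zero.

```python
# Illustration of a design-flexibility "entropy" measure (Shannon entropy
# over the options still available for one design variable). This is a
# sketch of the general idea, not the metric defined in the paper.
import math

def entropy(weights):
    """Shannon entropy (in bits) of a set of option weights."""
    total = sum(weights)
    probs = [w / total for w in weights if w > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical example: wall materials still considered viable early on...
print(entropy([1, 1, 1, 1]))   # 4 equally open options -> 2.0 bits
# ...versus late in the design, when the choice is all but fixed.
print(entropy([0.9, 0.1]))     # nearly decided -> ~0.47 bits
```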

To the researchers’ surprise, they found use of their LCA system had very little impact on reducing the range of design choices. “That’s the most remarkable result,” Ulm says. When introducing the LCA into the early stages of the design process, “you barely touch the design flexibility,” he says. “I was convinced we would come to a compromise,” where design flexibility would have to be limited in order to gain better lifecycle performance, Ulm says. “But in fact, the results proved me wrong.”

The system looks at the full range of climate impacts from a new structure, including all three phases: construction, including examining the embodied energy in all the materials used in the building; operation of the building, including all of the energy sources needed to provide heating, cooling, and electrical service; and the final dismantling and disposal, or repurposing of the structure, at the end of its service.

To evaluate the lifecycle impact of design choices requires looking at a wide range of factors. These include: the location’s climate (for their research, they chose Arizona and New England as two very different cases of U.S. climate); the building’s dimensions and orientation; the ratio of walls to windows on each side; the materials used for walls, foundations, and roofing; the type of heating and cooling systems used; and so on. As each of these factors gets decided, the range of possibilities for the building gets narrower and narrower — but not much more so than in any conventional design process.

At any point, the program “would also provide information about a lot of the things that are not yet defined,” essentially offering a menu of choices that could lead to a more environmentally friendly design, says Kirchain, who is a principal research scientist at MIT and co-director of the CSH, which supported the project.

While designed particularly for reducing the climate impact of a building, the same tool could also be used to optimize a building for other criteria, such as simply to minimize cost, the researchers say.

Getting in early

Thinking about issues such as the ultimate fate of a building at the end of its functional life tends to be “not in the same order of interest for the designing architect, when they first work on a design,” compared to more immediate factors such as how the building will look to the client, and meeting any particular functional requirements for the structure, Ulm says. But if the new LCA tools are integrated right into the design software they are using, then indications of how a given design choice can affect the outcome would be constantly available and able to easily influence choices even in small, subtle ways early in the process.

By comparing the design process with and without the use of such tools, the researchers found that the overall greenhouse gas emissions associated with a building could be reduced by 75 percent “without a reduction in the flexibility of the design process,” Ulm says.

Ulm compares it to indicators in a gym that provide feedback on how many calories are being burned at any point in an exercise regime, providing a constant incentive to improve results — without ever prescribing what exercises the person should do or how to do them.

While the program is currently designed to evaluate relatively simple single-family homes — which represent the vast majority of living spaces in the U.S. — the team hopes to expand it to be able to work on much bigger residential or commercial buildings as well.

At this point, the software the team designed is a standalone package, so “one of our tasks going forward is to actually transition to making it a plug-in to some of the software tools that are out there” for architectural design, says Kirchain.

While there are many software tools available to help with evaluating a building’s environmental impact, Kirchain says, “we don’t see a lot of architects using these tools.” But that’s partly because these tend to be too prescriptive, he says, pointing toward an optimal design and constricting the designer’s choices. “Our theory is that any designer doesn’t want to be told that this is how the design must be. Their role is to design without undue constraints,” he says.


Link: http://news.mit.edu/2018/software-tool-could-help-architects-design-efficient-buildings-0905

41
Australian researchers are using AI and mathematics to detect tiny changes that may precede the often-deadly events

In Northern India last week, seven members of a family were buried alive in their home by a mudslide caused by heavy rains. In July, a landslide at a jade mine in Myanmar killed 27. Early this year, debris flows in Southern California killed more than 20 people.

Landslides, mudslides, debris flows—all geologic hazards involving earth, mud or rocks moving quickly downhill—can happen almost anywhere there are slopes. As they occur suddenly and seemingly without warning, they’re often deadly. Though estimates vary, these events kill nearly 5,000 people a year.

But Australian researchers may have found a way to detect landslides as far as two weeks in advance, giving residents time to evacuate and engineers the opportunity to shore up slopes. Using AI and applied mathematics, they’ve developed software that can identify the subtle signs of an impending slide, signs that would be invisible to the naked eye.

“Right now, a lot of the predictions [about where landslides will happen] are based on someone’s gut instinct on the location,” says Antoinette Tordesillas, a professor at the School of Mathematics and Statistics at the University of Melbourne, who co-led the research. “We don’t rely on gut instinct. We want to develop an objective method here.”

To develop the software, Tordesillas and her team used radar data from mining companies, which produce extremely detailed information of the surface movement of slopes. The team took the data and looked for patterns, eventually figuring out which networks of movements indicated unstable locations. They also used data from a landslide-prone Italian volcano to help develop the algorithm.
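As a rough illustration of the kind of precursor such software hunts for, consider a patch of ground whose radar-measured displacement starts accelerating relative to its earlier behaviour. The toy sketch below is only a stand-in: the team’s actual method is a network-based pattern analysis, and the radar series here are invented.

```python
# Toy sketch: flag monitoring points whose recent displacement rate has
# accelerated sharply. An illustration of precursor-hunting in
# slope-movement data, NOT the network-analysis method used by the
# University of Melbourne team.
import numpy as np

def flag_accelerating_points(displacements: np.ndarray, factor: float = 3.0):
    """displacements: (n_points, n_days) cumulative surface movement in mm.
    Returns indices of points whose last-week velocity exceeds `factor`
    times their earlier average velocity."""
    early_rate = np.diff(displacements[:, :-7], axis=1).mean(axis=1)
    recent_rate = np.diff(displacements[:, -7:], axis=1).mean(axis=1)
    return np.where(recent_rate > factor * np.abs(early_rate) + 1e-6)[0]

# Hypothetical radar series: point 1 creeps steadily, point 0 speeds up.
days = np.arange(30)
stable = 0.1 * days
accelerating = 0.1 * days + 0.05 * np.maximum(days - 23, 0) ** 2
print(flag_accelerating_points(np.vstack([accelerating, stable])))  # -> [0]
```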

“It’s a very big data set, and this is an effort that is basically like finding a needle in a haystack filled with needles,” Tordesillas says. “It’s not just finding any pattern, because there are so many patterns that come out in data on landslides. The challenge is finding the one pattern that can give you a clue on the location where this event is to happen in the future.”

The software can also incorporate data about other landslide risk factors, like rainfall and erosion, making the targeting even more precise. The data used for monitoring can come from radar based on the ground, on satellites or even in drones.

Tordesillas and her team hope the software will help some of the world’s most vulnerable populations.

“Landslides are a global problem,” she says. “But especially it’s become really prevalent in Third World countries in what’s called ‘garbage cities.’”

These garbage cities, Tordesillas explains, are landfills with populations of squatters who live amidst the trash, picking through it for things to sell or trade. Globally there are about 15 million people living in such conditions. Garbage cities are especially prone to landslides, and they’re inherently unstable.

“You’re talking about mounds of garbage comprising loosely bound solids, rubbish,” Tordesillas says. “The difference between those areas and a natural slope is that in a natural slope the geological material has had millennia to congeal and solidify to form a stable material.”

While landslides affecting wealthy areas like Southern California make headlines, garbage city landslides are often not even reported, as the squatter villages are illegal. Tordesillas hopes her team’s software could help give early warning to these residents.

“This sounds promising,” says Michael Hamburger, a professor of geophysics at Indiana University who studies landslides, of the technology.

Monitoring landslide-prone areas produces massive amounts of data, Hamburger says, and scientists need better ways of analyzing the data. Technologies like the University of Melbourne software stand to help make this analysis happen more quickly.

But only areas being monitored could be helped by the software, and most landslide-prone areas aren’t monitored at all, Hamburger says. “It’s a tiny percentage [that is monitored],” he says, “and over the world there are millions and millions of square miles, particularly in mountainous regions, and particularly in the developing world, that are systematically prone to landslides that are definitely not being monitored in any systematic way.”

Tordesillas hopes data can one day be collected via small portable devices, perhaps even phones, making monitoring more accessible for more locations.

“We can then take that data and return within minutes a probability of a collapse happening,” she says.

The team also hopes to test their tools for structural health monitoring, predicting the collapse of structures like bridges or dams.

With recent collapses like the bridge in Genoa, Italy that killed more than 40 and the Florida bridge collapse that killed six still fresh in our collective mind, the world might want to cross its fingers for the technology’s success.




https://www.smithsonianmag.com/innovation/new-software-can-predict-landslides-weeks-before-they-happen-180970181/

42
Science and Information / Global Temperatures Highest in 4000 Years
« on: March 08, 2013, 06:09:05 PM »
Global temperatures are warmer than at any time in at least 4,000 years, scientists reported Thursday, and over the coming decades are likely to surpass levels not seen on the planet since before the last ice age.
 Previous research had extended back roughly 1,500 years, and suggested that the rapid temperature spike of the past century, believed to be a consequence of human activity, exceeded any warming episode during those years. The new work confirms that result while suggesting the modern warming is unique over a longer period.

Even if the temperature increase from human activity that is projected for later this century comes out on the low end of estimates, scientists said, the planet will be at least as warm as it was during the warmest periods of the modern geological era, known as the Holocene, and probably warmer than that.

That epoch began about 12,000 years ago, after changes in incoming sunshine caused vast ice sheets to melt across the Northern Hemisphere. Scientists believe the moderate climate of the Holocene set the stage for the rise of human civilization roughly 8,000 years ago and continues to sustain it by, for example, permitting a high level of food production.

In the new research, scheduled for publication on Friday in the journal Science, Shaun Marcott, an earth scientist at Oregon State University, and his colleagues compiled the most meticulous reconstruction yet of global temperatures over the past 11,300 years, virtually the entire Holocene. They used indicators like the distribution of microscopic, temperature-sensitive ocean creatures to determine past climate.

Like previous such efforts, the method gives only an approximation. Michael E. Mann, a researcher at Pennsylvania State University who is an expert in the relevant techniques but was not involved in the new research, said the authors had made conservative data choices in their analysis.

“It’s another important achievement and significant result as we continue to refine our knowledge and understanding of climate change,” Dr. Mann said.

Though the paper is the most complete reconstruction of global temperature, it is roughly consistent with previous work on a regional scale. It suggests that changes in the amount and distribution of incoming sunlight, caused by wobbles in the earth’s orbit, contributed to a sharp temperature rise in the early Holocene.

The climate then stabilized at relatively warm temperatures about 10,000 years ago, hitting a plateau that lasted for roughly 5,000 years, the paper shows. After that, shifts of incoming sunshine prompted a long, slow cooling trend.

The cooling was interrupted, at least in the Northern Hemisphere, by a fairly brief spike during the Middle Ages, known as the Medieval Warm Period. (It was then that the Vikings settled Greenland, dying out there when the climate cooled again.)

Scientists say that if natural factors were still governing the climate, the Northern Hemisphere would probably be destined to freeze over again in several thousand years. “We were on this downward slope, presumably going back toward another ice age,” Dr. Marcott said.

Instead, scientists believe the enormous increase in greenhouse gases caused by industrialization will almost certainly prevent that.

During the long climatic plateau of the early Holocene, global temperatures were roughly the same as those of today, at least within the uncertainty of the estimates, the new paper shows. This is consistent with a large body of past research focused on the Northern Hemisphere, which showed a distribution of ice and vegetation suggestive of a relatively warm climate.

The modern rise that has recreated the temperatures of 5,000 years ago is occurring at an exceedingly rapid clip on a geological time scale, appearing in graphs in the new paper as a sharp vertical spike. If the rise continues apace, early Holocene temperatures are likely to be surpassed within this century, Dr. Marcott said.

Dr. Mann pointed out that the early Holocene temperature increase was almost certainly slow, giving plants and creatures time to adjust. But he said the modern spike would probably threaten the survival of many species, in addition to putting severe stresses on human civilization.

“We and other living things can adapt to slower changes,” Dr. Mann said. “It’s the unprecedented speed with which we’re changing the climate that is so worrisome.”

Source: http://www.nytimes.com

43
Data harvested by enterprise file sharing and cloud storage company Egnyte shows Apple's iOS, led by the iPhone, is growing its share of the enterprise market, while Google's Android user base continues to contract.

According to TechCrunch, Egnyte tracked which mobile operating systems were being used to access the firm's servers, with the sample comprising 100,000 paying customers over the past year and a half.

While the report does not reflect an exhaustive rundown of the worldwide enterprise market — U.S. companies accounted for 80 percent of the data, with the remaining 20 percent attributed to European entities — it gives an overview of how iOS and Android are faring in the corporate marketplace.


OS marketshare for Q3/Q4 2011 (left) and 2012 full-year (right). | Source: Egnyte


Until recently, the share of smartphones and tablets running Google's mobile platform had been holding steady at 30 percent, while iOS devices accounted for nearly 70 percent of Egnyte's traffic. Over the third and fourth quarters of 2011, the iPhone and iPad took a 28 percent and 40 percent share of traffic, respectively. Interestingly, usage of Apple's iOS products flip-flopped during 2012, with the iPhone capturing a 42 percent share, while the iPad fell to 27 percent.

However, preliminary data for the first quarter of 2013 shows Android impressions fell to 22 percent as the iPhone and iPad gained ground, accounting for a respective 48 percent and 30 percent of tracked users.




Preliminary OS marketshare data for Q1 2013
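Taken together, the quoted figures put combined iOS traffic at roughly 68 percent in late 2011, 69 percent across 2012, and 78 percent in early 2013, a quick sanity check on Egnyte's own "about 70 percent" claim. The arithmetic below simply re-adds the percentages reported here.

```python
# Quick sanity check on the shares quoted in the article
# (simple arithmetic on the reported percentages, nothing more).
periods = {
    "Q3/Q4 2011": {"iPhone": 28, "iPad": 40},
    "2012 full year": {"iPhone": 42, "iPad": 27},
    "Q1 2013 (preliminary)": {"iPhone": 48, "iPad": 30},
}

for period, shares in periods.items():
    ios_total = shares["iPhone"] + shares["iPad"]
    print(f"{period}: combined iOS share = {ios_total}%")
```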

Speaking to the results, Egnyte told TechCrunch that smartphones are likely being used for many business-oriented tasks like checking and responding to corporate email. Another factor could be the current limitations of tablets, which have yet to completely replicate the usability of enterprise laptops.

"Apple seems to have at least temporarily won the hearts and minds of business users with its products accounting for about 70 percent of our traffic," the company said. "This is important because it’s a flip-flop from the days of old, where Apple products were rarely seen in the corporate landscape. It’s also an indication that when BYOD wrested control over what devices consumers used from IT, they overwhelmingly chose an easy to use product that focused on UI and usability, perhaps even at times over depth."

Thursday's report comes on the heels of another study conducted by mobile device, app and data security firm Good Technology, which found iOS devices accounted for 77 percent of all enterprise activations across its network for the fourth quarter of 2012. Over the same period, Android dropped from 29 percent of activations to 22.7 percent.

44
Science and Information / Computer software reconstructs ancient languages
« on: February 13, 2013, 01:32:18 PM »

Thanks to modern technology a new tool has been developed that can reconstruct long-dead languages. The British Broadcasting Corporation (BBC) reported on Feb. 12 that a team of researchers have developed software that can rebuild “protolanguages” - the ancient tongues from which our modern languages evolved.

The new computerized system has produced a list of what the ancestor words of 637 Austronesian languages (spoken in Indonesia, Madagascar, the Philippines, Papua New Guinea, Malaysia and elsewhere) would have sounded like. In more than 85 percent of cases, the automated reconstruction came within one “character” or sound of the ancestor word commonly accepted as true by linguists.

This type of labor-intensive work is generally carried out by linguists. According to the BBC, Dan Klein, an associate professor at the University of California, Berkeley, said: "It's very time consuming for humans to look at all the data. There are thousands of languages in the world, with thousands of words each, not to mention all of those languages' ancestors.

"It would take hundreds of lifetimes to pore over all those languages, cross-referencing all the different changes that happened across such an expanse of space - and of time. But this is where computers shine."

Over thousands of years, tiny variations in the way that we produce sounds have meant that early languages have morphed into many different descendants.

"The trick is to identify these patterns of change and then to 'reverse' them, basically evolving words backwards in time."

The Huffington Post reports that in the past, linguists used the “comparative method,” examining by hand the same word in two or more languages, while attempting to reconstruct the parent language from which the languages may have descended.

From a database of 142,000 words, the system was able to recreate the early language from which these modern tongues derived. The scientists believe it would have been spoken about 7,000 years ago.

Although researchers are able to reconstruct languages that date back thousands of years, a question remains as to the possibility of going even further back to recreate the very first protolanguage from which all others evolved.

Languages of the Bible

The recent language research brings to mind the possibility of getting back to the original languages of the Bible. On ChristianCourier.com, Wayne Jackson discusses the three original languages of the Bible: Hebrew, Aramaic, and Greek, recognizing that words serve as the medium of God’s special revelation to humanity.

Jackson notes that the Hebrew of the Old Testament is a Semitic language that is closely related to Aramaic. Both belong to a family of ancient tongues employed mainly in Syria, Lebanon, and Israel. It is believed that Hebrew came from the Canaanite language, sometimes called the “language of Canaan” or the “Jews’ language” in the Old Testament. In the New Testament, it is called “Hebrew.”

Aramaic is a close cognate language or a dialect of Hebrew. Though Hebrew remained the “sacred” tongue of the Jews, they, like others in the Middle East, began using vernacular Aramaic for everyday conversation and writing sometime after the sixth century B.C. In the first century A.D., Aramaic, in one dialect or another, was the common daily tongue of the Palestinian Jews. It is believed that Jesus Christ spoke Aramaic, in light of a number of Aramaic expressions that are transliterated into Greek in the Gospels. In the New Testament epistles, several Aramaic words can also be found.

The Greek language has passed through several major periods of change; the New Testament was composed in Koine, that is, universal or “common” Greek. Koine was the normal street language in Rome, Alexandria, Athens, and Jerusalem. When the Romans finally conquered the Greeks, it was Greek influence that flowed throughout the empire.

Those who study the Scriptures are particularly intrigued by the recent developments in linguistic research. The closing stanza of a poem written about Aramaic, “The Holy Language”—speaks to those who recognize the significance of the research regarding protolanguages and the attempts to get back to the original language:

    As students of the Holy Language,

    We place our ears near to the lips of God

    And set our eyes to read and see more clearly

    His Will and Word revealed.

[Source: http://www.examiner.com]

45
Faculty Sections / Google unveils Nexus 4 phone wireless charger
« on: February 13, 2013, 12:53:33 PM »

When it comes to charging the Nexus 4, Google has a new way for users to do so in style.

The Mountain View, Calif., company began selling a $59.99 wireless charger for its flagship phone this week that also doubles as a stand.

The charger looks like a tilted, black hemisphere. In the back, the charger has an outlet port, and at the top of the device is a charging pad, which is labeled with the "Nexus" brand.


"Its angled surface provides easy visibility of your phone while charging," the device description reads.

When a user lays a Nexus 4 phone face up on the device, it will fully recharge the phone in about four hours. And because the device uses a Qi inductive charger, it'll also work with other popular phones that share the technology, including the HTC Droid DNA and Nokia Lumia 920.

The charger can be purchased now on Google Play and will ship within a week.

Pages: 1 2 [3] 4