Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - azizur.bba

1
Onur Güntürkün left his native Turkey decades before he learned of whistled Turkish—a means of communication practiced in a remote region of the country. Güntürkün, a cognitive neuroscientist at Ruhr University Bochum in Germany, was on sabbatical in Australia a few years ago when a colleague there mentioned that he had visited these villages where whistling is used to converse over long distances. “From the first moment, it was clear to me this was what I needed to conduct critical experiments,” he says.
Güntürkün studies the asymmetries that exist in the brain, and the dogma in his field, he explains, has been that the left hemisphere is dominant in processing language. The right hemisphere plays a smaller role, primarily involving the interpretation of the prosodic aspect of language: the intonations and stress on certain words that can change the meaning of a sentence and add emotion or emphasis.
“We assumed that the hemispheric asymmetry that we have in language processing is crafted in such a way that the left hemisphere doesn’t care about the physical structure of language,” he says. So whether it’s a tonal language that relies on pitch, one that employs clicks, or sign language, the brain still knows it’s language and the left hemisphere carries the processing load.
Whistled language, however, is something altogether different, Güntürkün says. The sounds of whistled Turkish recapitulate spoken syllables as closely as possible, but cannot duplicate every aspect of speech; the slower, more melody-like changes in acoustic signal characteristic of the whistled language are just what the right hemisphere is primed to process. “Therefore, whistled language is a perfect experiment of nature. It’s a full-blown language but delivered with a physical structure for which the right hemisphere is dominant.” Güntürkün wanted to see whether the asymmetric processing of language would hold up among whistled-Turkish speakers, so he and his wife headed for the hills of Kusköy in the northeast of the country.
They made friends with locals who helped recruit whistled-Turkish speakers for the study. Because the Güntürküns visited during Ramadan, the cafés in town made for quiet daytime places to conduct experiments. The setup they used is a tried-and-true method of assessing lateralization: through headphones each ear receives a different sound simultaneously (for example, bah in one ear and tah in the other), and the listener is asked to say which one he perceived. If the sound perceived came through the right ear, that would mean the left hemisphere dominated the sound processing, and vice versa.
When study participants were played syllables of spoken Turkish, the brain lateralization was as expected: most of the time, the volunteers perceived the sound that came into the right ear. But when they heard bits of whistled Turkish, the asymmetry disappeared—neither hemisphere seemed to dominate (Current Biology, 25:R706–08, 2015). “There is a certain physical form of language, in this case whistles, where the dominance of the left hemisphere is broken,” Güntürkün says.
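The logic of this ear-advantage measure can be sketched in a few lines. This is a hypothetical illustration of the dichotic-listening analysis described above, not the study's actual code, and the trial data below are invented for the example:

```python
def laterality_index(reports):
    """Compute an ear-advantage score from dichotic-listening trials.

    reports: list of 'R' or 'L', the ear whose syllable the listener
    perceived on each trial. A positive index (right-ear advantage)
    implies left-hemisphere dominance for that stimulus type; an index
    near zero implies no hemispheric asymmetry.
    """
    right = reports.count("R")
    left = reports.count("L")
    return (right - left) / (right + left)

# Toy data (invented, not from the study): spoken syllables show a
# right-ear advantage, whistled syllables do not.
spoken = ["R"] * 14 + ["L"] * 6     # index 0.4 -> left hemisphere dominates
whistled = ["R"] * 10 + ["L"] * 10  # index 0.0 -> no asymmetry
```

The index is just a normalized difference in report counts; the qualitative pattern in the study corresponds to a clearly positive index for spoken Turkish and one near zero for whistled Turkish.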
Martin Meyer, who studies speech processing at ETH Zürich, cautioned against overinterpreting the results. For one, he asks, are short notes of whistled Turkish actually perceived as syllables in the same way one would discern discrete bits of speech? “This is not mentioned in the paper, whether there is a tight correspondence between syllables in spoken language and syllables in whistled language.” If not, then the symmetry observed is what one would expect to see if a person simply listened to whistled notes.
Julien Meyer (no relation to Martin), a researcher at CNRS in France who has spent years documenting whistled languages, says his own work shows that such languages (there are quite a few in addition to Turkish) do indeed reflect the phonology—the basic sound elements—of spoken language. “Some people try to say it’s a simpler language,” Meyer says. But the phonological references “you have in your brain . . . these are the same references.”
However, Julien Meyer says he’d like to temper the idea that whistled languages are fully perceived as slow modifications in sound. In a study of whistled Turkish, he found that speakers are picking up on formants, characteristic acoustic properties of speech sounds in the mouth. Formants, he says, are rapid parts of speech analyzed by the brain. “Whistled consonant-vowel transitions are as quick as spoken ones.”
One way to better assess what participants perceive, suggests Martin Meyer, is to play them longer segments of speech, say, phrases, and record brain activity with an electroencephalogram (EEG) recorder or fMRI. Güntürkün says he had to make do with a low-tech solution given Kusköy’s rugged terrain and remoteness. Such geographical features are typical of regions where whistled languages are often employed to communicate over long distances. In his book, Whistled Languages, Julien Meyer lists those used around the world: from Kickapoo and Siberian Yupik in North America to Punan Busang in Malaysia to a handful of languages in Brazil and two dozen whistled languages in Africa.
Not surprisingly, whistled languages are threatened by the ubiquity of cell phones. Güntürkün says the number of whistled-Turkish speakers has dropped considerably in the last few decades, and just about no women use it anymore. Martin Meyer says such “exotic languages” are valuable tools in learning more about how the human brain processes language, a field of study that has focused on English, German, and other common languages. “The diversity of language worldwide is really massive,” he says. “But there’s no comparable diversity of brains. The hardware is more or less the same. So the brains of humans all over the world are able to master all those diverse languages. [How they do] is one of the really interesting questions.”

2
In order to work well together, people must understand what others are thinking and feeling. New research suggests that we are attracted to those whose emotions we feel we can easily understand. And our confidence in that emotional understanding is reflected in the brain’s reward system, according to a study led by Silke Anders of the University of Lübeck, Germany, and colleagues, which was published today (April 4) in PNAS.
“This is fascinating, because it indicates that one of the reasons we find someone attractive could be related to our ability to put ourselves into their shoes,” Martin Hebart, a neuroscientist at the US National Institute of Mental Health who has collaborated with some of the study’s authors but was not involved in the research, wrote in an email.
But that doesn’t mean the opposite is true. “Just because you find someone attractive doesn’t mean you can read their emotions well,” Hebart added.
Previous studies have shown that encountering someone we find attractive activates our brain’s reward system. But until now, most of the research has focused on physical attraction—neglecting the role of social interaction.
The researchers found evidence to suggest that a person’s ability to understand another’s emotions may reflect the amount of overlap between what they call the “neural vocabulary” of the two individuals—the common emotional language they share. “If there is a large overlap between the sender’s and the perceiver’s ‘neural vocabulary,’ then the perceiver should be able to decode the sender’s signals more easily than in cases where there is less overlap,” Anders explained.
To investigate this hypothesis, Anders and her colleagues performed two experiments. In the first experiment, 21 women and 19 men were shown videos of six different women acting either fearful or sad. The volunteers then had to identify which of these two emotions the subject was expressing, and how confident they were in their assessment.
Immediately before and after each video trial, the participants were instructed to enlarge a picture of the woman in the video until it resembled a comfortable distance for conversation—an indicator of how approachable they found her. The participants were also asked how much they agreed with several statements about the woman in each video, including: “I would like to meet her in real life,” “I feel that she would understand me,” and, “I think I could discuss personal problems with her.”
The second experiment, which involved 28 women and 24 men, was similar to the first, except that the participants’ brain activity was measured using functional magnetic resonance imaging (fMRI) while they were watching the women in the videos express emotion. In addition, the observers were later instructed, like the women in the videos, to try to experience and show fear and sadness themselves.
The volunteers in both experiments were able to accurately identify the emotions expressed by the women in the majority of the videos. The more confident the study participants indicated they were in understanding the emotion of the subject in the video, the more attractive they found her, as signaled by how much they resized the image of her and how they responded to the questions about her.
And this confidence in their emotional comprehension showed up in their brain activity. Changes in attraction between observers were linked to activation of the ventral striatum and medial orbitofrontal cortex (mOFC)—part of the brain’s reward circuitry. These responses were specific to pairs of observers and subjects: in other words, they weren’t merely a result of how generally attractive or expressive the women in the videos were.
In addition, similarity between the neural activity recorded while participants watched the women experience an emotion and the activity recorded while the participants experienced that emotion themselves was also tied to greater activity in the mOFC, the researchers reported.
The findings “might relate more generally to a well established literature suggesting that we are attracted to others not just by virtue of their physical attractiveness, but also by the extent to which we perceive similarities between ourselves and that other person,” psychologist John O’Doherty, the director of the Caltech Brain Imaging Center who was not involved in the work, told The Scientist in an email.
But the study has some notable limitations, according to Michael Spezio, a neuroscientist at Scripps College who also was not involved with the research. For one, the study “is primarily not about interpersonal attraction, but about willingness to approach a person,” and did not control for the role of physical attraction, Spezio wrote in an email.
“Second, the paper refers to a single reward system in the brain,” he added. “Yet the study did not directly measure this for the participants involved.”
The research was intentionally limited to investigating the response to emotions expressed by women only. (Previous studies have suggested women express emotions more accurately than men, the researchers noted.)
Additionally, while the researchers made every attempt to elicit emotions that were as natural as possible, the emotions were somewhat artificial. “This is still not natural communication, and we are certainly still missing some aspects future studies will reveal,” Anders said.
It remains an open question whether the brain signals represent only a reward for reading someone’s emotions, or true learning—which would suggest that we can improve our emotional understanding by interacting with someone. “This is a question we are currently investigating,” Anders added.

3
Each of us has a unique neural connectome, a network within and between different brain regions that is preserved both at rest and during a range of activities. Now, scientists from the University of Oxford, U.K., and their colleagues have shown that this map of the brain’s activity at rest provides information on what that brain looks like when performing a task. The team’s results were published today (April 7) in Science.
“We show that we can essentially predict how people will use their brains based on [their resting brain activities],” said study coauthor Saad Jbabdi of Oxford.
Many neuroimaging studies pool data across individuals to identify brain activities on average. Jbabdi and his colleagues instead analyzed individual brain activity data from 98 Human Connectome Project participants. The brains of these individuals were previously scanned via functional magnetic resonance imaging (fMRI) while the participants were at rest and while they were performing 47 different tasks (including social interactions and memory-related tasks). Jbabdi’s team also analyzed images of the architecture of the individuals’ brains mapped using MRI and diffusion-weighted MRI, a technique that maps the flow of water molecules throughout the brain.
Individual brains differ in shape, architecture, and connectomes, impacting how different parts of the brain communicate. A long-standing question in neurobiology is to what extent brain architecture and/or neural connections underlie behavioral differences observed among people.
Jbabdi’s team reasoned that a person’s resting brain activity—measured indirectly, using fMRI—could be used to predict personalized responses to different tasks.
Integrating the various neuroimaging data types, the team built a model that accurately predicted individual brain responses to 46 of the 47 tasks assessed. (Because the model focused on cortical activity, it was less able to predict a person’s brain activity when performing a gambling task, which involves subcortical brain regions, explained study coauthor Ido Tavor, a postdoctoral fellow in Jbabdi’s lab.)
The Oxford team is not the first to apply such an approach. After showing that a person’s brain connectivity could be used to predict neuronal activation in his or her brain during a given activity, a group at MIT last year showed in Cerebral Cortex that measuring structural neuronal connectivity using diffusion-weighted MRI could help researchers predict fMRI responses to four different types of visual stimuli in 26 people. Both the Oxford and MIT groups used similar modeling approaches to predict task-specific brain activity from structural and connectivity data.
“It is a longstanding idea that connectivity and function in the brain go together,” explained Nancy Kanwisher, a professor of cognitive neuroscience at MIT and a coauthor on the 2015 Cerebral Cortex study who was not involved in the work of Jbabdi and colleagues. “But it is only recently that the work of Zeynep Saygin and [that presented in] this new Science paper have shown this tight link between connectivity and function all over the brain.”
Jbabdi said he and his colleagues were not aware of Kanwisher and colleagues’ 2015 study and therefore failed to cite it. He has since contacted Science, requesting to add the citation, he told The Scientist.
“The Science paper extends the [prior publication] in important ways,” said Michael Cole, a neuroscientist at Rutgers University in New Jersey. “There are many more tasks included [in this latest study], and that is not trivial because the regions in the brain where more-complicated cognitive tasks are processed are not well understood.”
The results of both studies suggest it may be possible to use an easy-to-generate image of an individual’s resting brain to infer how parts of the brain function in people who cannot perform certain tasks. “The ability to predict cognitive task activity in individuals could be really important in clinical applications,” said Cole.
Jbabdi’s group is now testing whether its model can be extended from healthy individuals to those with certain disorders known to affect brain function. “Our brains have elaborate networks,” said Jbabdi. “We can now learn what these networks are for every individual.”

4
(Duplicate of topic 3 removed.)
5
Daffodil Institute of Languages (DIL) / Thoughts Derailed
« on: April 21, 2017, 09:06:26 PM »
The same brain mechanism by which surprising events interrupt movements may also be involved in disrupting cognition, according to a study.
By Tanya Lewis | April 18, 2016
Scientists may have an explanation for why startling events like a phone ringing can disrupt one’s train of thought. By recording the brain activities of healthy people and patients with Parkinson’s disease, researchers at the University of California, San Diego (UCSD), found that the brain mechanism involved in stopping body movements is activated when memory is disrupted, according to a study published yesterday (April 18) in Nature Communications.
“The radically new idea is that just as the brain's stopping mechanism is involved in stopping what we're doing with our bodies it might also be responsible for interrupting and flushing out our thoughts,” study coauthor Adam Aron of UCSD said in a statement.
To test this hypothesis, study coauthor Jan Wessel, a postdoc in Aron’s lab, and colleagues measured EEG signals from the scalps of 20 healthy participants, as well as signals from electrodes implanted in the brains of seven people with Parkinson’s disease. The participants were presented with a string of letters to remember, but before they had to recall them, they heard a simple tone. In a handful of trials, the participants heard a snippet of birdsong instead of the tone.
As expected, the participants had worse recall after hearing the birdsong, compared with hearing the simple tone. And hearing the surprising sound also triggered increased activity in a region deep in the basal ganglia called the subthalamic nucleus (STN), part of a brain network previously shown to be involved in stopping bodily movements. This type of brain activity may also help explain why Parkinson’s patients sometimes freeze or have trouble shifting their focus.
The mechanism for losing one’s train of thought might have evolved as an adaptation to threats in the environment, such as encountering a lion in the bushes, the researchers speculated.
 “It might also be potentially interesting to see if this system could be engaged deliberately,” Wessel said in the statement, “and actively used to interrupt intrusive thoughts or unwanted memories.”

6
You need to know / Don’t Ask, Don’t Hell
« on: April 21, 2017, 09:05:30 PM »
So, where do most evangelicals stand on the issue of hell? Sprinkle and Date suggest that it is difficult to know, since people are reluctant to publicly challenge traditional views. 
“We have a very fear-driven evangelical culture where if you don't toe the line, you get kind of shunned,” says Sprinkle. “It's really kind of scary.” 
Still, the debate over hell shows no sign of dissipating among evangelical scholars. If anything, the scope of the discussion appears to be expanding. Sprinkle, who recently co-edited a book, Four Views on Hell, raised theological eyebrows when he included an essay by theologian Robin Parry defending universalism—the view that all people will eventually be saved. It’s a doctrine that evangelicals, including annihilationists, widely view as incompatible with their religious teachings. 
But, “the landscape has changed,” opines one writer at the Christian Post. “After reading Parry’s essay, you still may not be convinced that he is right. But it’s no longer enough to simply state categorically that an evangelical can’t be a universalist!” 
For his part, Mark Galli believes that many evangelicals will choose to accept that hell is a paradox that can never be fully understood. 
“When it comes to heaven and hell, if God had wanted us to know definitively one way or the other, he would've made himself more clear,” he says. “But he left just tantalizing hints about what might happen. One can move forward, happily, and live with that mystery.”

7
You need to know / The Campaign to Eliminate Hell
« on: April 21, 2017, 09:05:08 PM »
A new generation of evangelical scholars is challenging the idea that sinners are doomed to eternal torment—but traditionalists are pushing back.

Hell isn’t as popular as it used to be. 
Over the last 20 years, the number of Americans who believe in the fiery down under has dropped from 71 percent to 58 percent. Heaven, by contrast, fares much better and, among Christians, remains an almost universally accepted concept. 
Underlying these statistics is a conundrum that continues to tug at the conscience of some Christians, who find it difficult to reconcile the existence of a just, loving God with a doctrine that dooms billions of people to eternal punishment. 
"Everlasting torment is intolerable from a moral point of view because it makes God into a bloodthirsty monster who maintains an everlasting Auschwitz for victims whom he does not even allow to die," wrote the late Clark Pinnock, an influential evangelical theologian.
While religious philosophers have argued over the true nature of hell since the earliest days of Christianity, the debate has become especially pronounced in recent decades among the millions of Americans who identify themselves as evangelicals. The once taboo topic is being openly discussed as well-regarded scholars publish articles and best-selling books that rely on careful readings of Scripture to challenge traditional views.
“What if the muting of hell is due neither to emotional weakness nor loss of Gospel commitment?” writes Edward Fudge, whose 1982 book, The Fire That Consumes, is widely regarded as the scholarly work that jump-started the current debate. “What if the biblical foundations thought to endorse unending conscious torment are less secure than has been widely supposed?” 
Fudge is among those who endorse an alternative doctrine, known as “annihilationism” or “conditional immortality,” which holds that, after death, sinners simply cease to exist, while those who are saved enjoy eternal life under God’s grace. Although it’s not a positive outcome for the wicked—in fact, it amounts to spiritual capital punishment—it’s deemed a far more merciful and just fate than an eternity of torture. 
Traditionalists are pushing back at this doctrine, which they view as heresy born out of misguided sentimentality. But, annihilationists believe they have already made significant inroads within the evangelical community.   
“My prediction is that, even within conservative evangelical circles, the annihilation view of hell will be the dominant view in 10 or 15 years,” says Preston Sprinkle, who co-authored the book Erasing Hell, which, in 2011, debuted at number three on the New York Times bestseller list. “I base that on how many well-known pastors secretly hold that view. I think that we are at a time and place when there is a growing suspicion of adopting tradition for the sake of tradition.”
In the Beginning 
In its earliest years, Christianity didn’t have a consensus on the nature of hell. Origen Adamantius, a third-century theologian, believed the wicked were punished after death, but only long enough for their souls to repent and be restored to their original state of purity. This doctrine, known as universalism, envisioned that everyone—including Satan—would eventually be redeemed and reunited with God. 
Contemporary theologians generally credit Irenaeus of Lyons, a second-century bishop, as the intellectual forefather of annihilationism. In his seminal five-volume work, Against Heresies, he emphasized that the soul is not inherently immortal—eternal life would be bestowed upon the good with the resurrection of Christ, while the wicked would be left to die and fade from existence. “It is the Father who imparts continuance forever on those who are saved,” Irenaeus wrote.
But it was Augustine of Hippo and his book, City of God, published in A.D. 426, that set the tone for official doctrine over the next 1,500 years. Hell existed not to reform or deter sinners, he argued. Its primary purpose was to satisfy the demands of justice. Augustine believed in the literal existence of a lake of fire, where “by a miracle of their most omnipotent Creator, [the damned] can burn without being consumed, and suffer without dying.” 
In theological circles this doctrine is known as Eternal Conscious Torment (ECT). Critics fault it for its lack of proportion. Why would a loving God punish a single lifetime of sin with endless lifetimes of torture? And, among sinners, does an adulterer merit the same punishment as a murderer? And what about the billions of people whose only sin was to follow a different faith? 
“I question whether 'eternal conscious torment' is compatible with the biblical revelation of divine justice,” wrote John Stott, the Anglican clergyman and world-renowned evangelical leader who died in 2011. “Fundamental to it is the belief that God will judge people 'according to what they [have] done' (e.g. Revelation 20:12), which implies that the penalty inflicted will be commensurate with the evil done.” 
But, across the centuries, defenders of ECT have emphasized that sin is not something that can be measured by how it affects others. The only relevant issue is that it’s a rebellion against God.   
Some religious scholars point to examples throughout the Bible that illustrate how even “little sins” merit harsh penalties. Lot’s wife, for instance, did nothing more than glance in the wrong direction—but because she directly disobeyed God, she became a pillar of salt.   
“If people lied to us, disobeyed us, or spoke against us, would they be worthy of death?” writes theologian Robert Peterson, a prominent critic of annihilationism. “Of course not. If they do these things against God, do they deserve capital punishment? The Bible's consistent answer is yes.” 
Mark Galli, the editor of Christianity Today, points to Psalm 51, where David expresses remorse for adultery and his complicity in murder. “And yet he says in the middle of that Psalm, ‘Against you and you only have I sinned, Oh Lord,’” says Galli. “I think that's the other dimension. We realize there's something else we've violated here. That something else is a moral code that transcends us. And that moral code, of course, is written by God.” 
It Is Written 
Preston Sprinkle recalls, with embarrassment, his younger days in seminary, when he first heard that the evangelical leader John Stott was an annihilationist.   
“I remember hearing that thinking, you can't be a Christian and believe that,” he says. “I was just reciting, like a parrot, the evangelical narrative regarding anybody who doesn't toe the line. But, back six years ago, when I truly revisited the question of hell, I was kind of shocked at how little biblical support there was for the traditional view.”
Advocates for annihilationism (or, “conditionalism” as some prefer) emphasize they are not guided by sentimentality, but are engaging in a careful exegesis of Scripture that has long been discouraged by orthodoxy. Nor do they claim to advocate for a version of hell that represents a soft view on sin or a low view of God. 
“The fate that we conditionalists suggest awaits those who obstinately reject Christ is a fearful one,” says Chris Date, an independent theologian who runs a website, Rethinking Hell, and who helps organize an annual conference on the topic. “There is no greater human fear than death. We fight tooth and nail to preserve our lives at all costs. But death isn't unbelievable and archaic the way that eternal torment is to many,” Date says.   
“The Bible says the wages of sin is death, that death of life is the ultimate end of those who don't embrace Jesus,” says Sprinkle.  “It seems to be a pretty dominant narrative in the Scripture.” 
But traditionalists remain steadfast in their belief that eternal conscious torment (ECT) is a pillar of evangelical faith, and some worry that weakening it threatens to bring down the entire edifice.
“We need a fresh wave of great awakeners—those who will unapologetically preach hell fire in today's dire end times,” writes John Burton, a pastor and speaker. “To the shame of much of today's church there has been a firm and steadfast rejection of any truth that doesn't result in people feeling happy affection for God.” The narcissistic belief that God loves us so much that he couldn’t bear inflicting eternal punishment, Burton argues, encourages evil to expand unchecked. 
A traditionalist view of hell, however, does not necessarily mean fire and brimstone. “I certainly wouldn't agree that hell is a place of literal fire or torment,” says Galli. “I tend to be more favorable toward the metaphors that talk about hell as the absence of a love of God and that would be a miserable existence.” 
Galli describes himself as "in some respects" a traditionalist. “That is to say, what keeps me in the traditional camp is the teaching of Jesus,” he says. “If it was left up to me, I would probably eliminate hell from our vocabulary because it does present seemingly insurmountable problems. But Jesus does talk about it as a reality and he doesn't seem to have any doubts about it.”
The “seemingly insurmountable problems” include paradoxes that defy simple resolution. “One of the main problems with the doctrine of hell is that it’s a place where God is not present,” Galli notes. “Well, the fact is that God is omnipresent. How can you have a place that's bereft of God and yet it exists for eternity? That's kind of a theological impossibility.” 

8
You need to know / Special Delivery
« on: April 21, 2017, 09:04:30 PM »
Neurons in new brains and old
It’s hard to wrap one’s mind around the human brain. With its 86 billion neurons, even greater numbers of glial cells, a quadrillion synapses, and millions of miles of axons, this intricate organ doesn’t readily reveal its inner workings. But its complexity hasn’t kept researchers from striving to sort out the details of the brain’s form and functions.
Fascination with the brain is age-old. Gross anatomical dissections, starting in about 280 BCE in Alexandria and (after more than a thousand years of prohibition) resuming in the Renaissance, segued into microscopic examination. Over the centuries, structural drawings by Camillo Golgi and Santiago Ramón y Cajal have morphed into diagrams of the connectome and super-resolution images and videos of neurons in action. Continuous development of new techniques now allows neuroscientists to probe ever deeper into how the brain works.
During the long history of neurobiology, dogmatic beliefs about the brain have arisen, only to be toppled by new findings. In our annual issue dedicated to neuroscience, two features describe such dogma-busting research. Neuroscientist Margaret McCarthy debunks the idea that male and female brains differ only in brain areas related to reproduction in “Sex Differences in the Brain.” Certain areas of male and female noodles differ significantly from fetal development right on through adolescence and into adulthood, and McCarthy explains that it’s not just the neuronal connections; the behavior of glial cells also differs between the sexes. The upshot is that research on brain function must include female as well as male subjects to fully understand the importance of such differences.
Another long-held belief that has bitten the dust in the last few decades is that adult human brains do not generate new neurons. True, most of our lifetime supply of neurons is produced before birth; they proliferate, in fact, so overexuberantly in the fetal brain that half of them die before we are born. As people age they continue to lose neurons, albeit at a far slower rate. But even as the adult brain loses neurons, we now know, it also gains new ones. In “Brain Gain,” Senior Editor Jef Akst reports on the role played by these new neurons, which are especially prominent in the hippocampus, a brain region vital to learning and memory. As one investigator puts it: “We think that [adult] neurogenesis provides a way, a mechanism of living in the moment. . . . It clears out old memories and helps form new memories.” The article gives a sense of the excitement felt by researchers eager to understand what these new neurons do and to possibly harness neurogenesis to ameliorate psychiatric disorders and neurodegenerative diseases.
That the brain’s glial cells—astrocytes, oligodendrocytes, and microglia—are no more than a support system for neurons is another belief that has been upended. Glia myelinate axons, prune synapses, and perform valuable immune functions. A Lab Tools article, “Into the Limelight,” catalogs new techniques for the isolation and culture of glial cells, permitting gene-profiling studies, and methods for in vivo monitoring of astrocyte signaling. The article also touches on gliobiologists’ use of simpler model systems, such as flies, worms, and zebrafish, that have been so useful in studying neurons.
Other newly reported techniques can be found in “Holding Neurons Steady,” “Negative Thinking,” and “Brain Freeze.”
Handedness and language processing have long been thought to share a genetic basis because they are both highly lateralized in the brain. One Notebook article describes a study of multiple generations of 37 families of Dutch lefties that casts doubt on any genetic overlap, while another parses how the brain processes whistled language, in this case across-the-valley communications in a remote region of Turkey.
The brain still has many surprises to reveal. As more old, entrenched ideas about how the brain works are pruned away, a fuller understanding of this marvelous organ is bound to emerge.

9
You need to know / Shooting for the Moon
« on: April 21, 2017, 09:04:00 PM »
Defeating cancer is many times more difficult than planting a flag on our lunar satellite.

Last year, Vice President Biden said that with a new moonshot, America can cure cancer. . . . Let’s make America the country that cures cancer once and for all.
—President Barack Obama, State of the Union address, January 2016

The time has come in America when the same kind of concentrated effort that split the atom and took man to the moon should be turned toward conquering this dread disease. Let us make a total national commitment to achieve this goal.
—President Richard Nixon, signing into law the National Cancer Act of 1971, December 1971

In the decades separating these statements, such exhortations and promises didn’t come only from politicians; many scientists have also claimed that eradicating cancer was within reach. Beginning in February 2003, Andrew von Eschenbach, who served as director of the National Cancer Institute from 2002 to 2006, repeatedly named 2015 as the year by which he believed the disease could be vanquished.
So, in 2016, what’s the right degree of optimism to adopt? Treating cancer is enormously complicated by the fact that it is not a disease with just one root cause. It’s not even a single disease. But as massive databases accrue ever-more genomic information on myriad tumor types, cancer’s molecular underpinnings are coming into focus and beginning to guide the design of tailored therapies. Still, many conundrums remain.
This issue of The Scientist looks at progress in solving a number of those. Otto Warburg knew almost 100 years ago that tumor cells metabolize glucose differently than healthy ones. In “A Different Way of Doing Things,” Kivanç Birsoy and David Sabatini dive into the diverse ways that cancer cells reprogram their own metabolism to fuel their rapid proliferation. New findings about these altered metabolic pathways are pointing to novel therapeutic possibilities for stunting cancer’s growth.
An appreciation for the importance of a tumor’s surrounding environment has also been growing for some time now. Solid and fluid pressures exerted by and on tumors influence proliferating cancer cells starved for nutrients and oxygen, and affect inflammation, metastasis, and drug delivery. Lance Munn and Rakesh Jain describe the interplay between cancer cells and components of the extracellular matrix (ECM) in which a tumor resides (“The Forces of Cancer”). Targeting ECM components such as collagen and hyaluronan to alter physical stress on tumors could aid the delivery of antitumor drugs.
Metastasized cancers cause more than 90 percent of cancer deaths, but how that process occurs is still largely mysterious. An expanded version of The Literature section examines a number of recent reports about how exosomes released by primary tumor cells play a special role in preparing the metastatic site before the arrival of the primary tumor cells themselves. “They’re terraforming the environment to make it hospitable,” is how one researcher describes it. Exosomes are also possible targets for reducing cancer’s spread.
A relatively new conundrum in cancer research, and a complex one at that, is the role of the microbiome in both spurring and protecting against tumorous growth. Kate Yandell reports on how alterations to the gut microbiome and attendant changes in immune-modulated inflammation have been implicated in the progression of cancer and in the action of many anticancer therapies (“Microbes Meet Cancer”).
Conquering cancer will require deeper dives into genomic data and better noninvasive methods for diagnosing both the presence of cancer and treatment efficacy. “Pulling It All Together” describes systems-biology approaches that take this plunge, identifying functional elements, master regulators, and dysregulated but highly conserved genes that may serve as potential new drug targets. And the hype and hope about liquid biopsies are examined in “Banking On Blood Tests.”
There are lots of reasons for optimism, but no shortage of conundrums as researchers continue their quest to wipe out cancer.

10
Tourism & Hospitality Management (THM) / Shooting for the Moon
« on: April 21, 2017, 12:12:59 AM »

11
Global influx of machines set to open hot new tech market.
SAN FRANCISCO: An army of robots is on the move.
In warehouses, hospitals and retail stores, and on city streets, industrial parks and the footpaths of college campuses, the first representatives of this new invading force are starting to become apparent.
“The robots are among us,” says Steve Jurvetson, a Silicon Valley investor and a director at Elon Musk’s Tesla and SpaceX companies, which have relied heavily on robotics. A multitude of machines will follow, he says: “A lot of people are going to come in contact with robots in the next two to five years.”
The arrival of the robots—and their potentially devastating effect on human employment—has been widely predicted. Now, the machines are starting to roll or walk out of the labs. In the process, they are about to set off a financing boom as robotics—and artificial intelligence—becomes one of the hottest new markets in tech.
After growing at a compound rate of 17 percent a year, the robot market will be worth $135 billion by 2019, according to IDC, a tech research firm. A boom is taking place in Asia, with Japan and China, which is in the early stages of retooling its manufacturing sector, accounting for 69 percent of all robot spending.
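The arithmetic behind those IDC figures is easy to sanity-check. A minimal sketch (illustration only; the 2016 base value is back-solved from the article's numbers, and `compound_growth` is a hypothetical helper, not anything published by IDC):

```python
# Projecting market size under compound annual growth.
# The 17% CAGR and the $135B-by-2019 figure come from the article (IDC);
# the 2016 base value is back-solved here purely for illustration.

def compound_growth(base, rate, years):
    """Return `base` after `years` of compounding at `rate` (0.17 = 17%)."""
    return base * (1 + rate) ** years

base_2016 = 135 / 1.17 ** 3          # back-solve: ~$84.3B implied for 2016
print(round(base_2016, 1))           # 84.3
print(round(compound_growth(base_2016, 0.17, 3), 1))  # 135.0
```

At a steady 17 percent a year, in other words, the market roughly doubles every four and a half years (ln 2 / ln 1.17 ≈ 4.4).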
Although the money flowing into the new robotics industry is still relatively modest, all the lead indicators of the innovation economy are pointing up. Patent filings covering robotics technology—one sign of the expected impact—have soared. According to IFI Claims, a patent research company, annual filings have tripled over the past decade. China alone accounted for 35 percent of robot-related patent filings last year—more than double the number from nearest rival Japan.
In another sign of the expected boom, venture capital investments more than doubled last year to $587 million, according to research firm CB Insights.
Other investors are also piling in, says Manish Kothari of SRI International, a Silicon Valley research and development lab that has spun off robot companies. From private equity investors looking to build portfolios of robot investments, to new “incubators” such as Playground, started by former Google robotics chief Andy Rubin, the investment options have been proliferating rapidly.
But in many cases, the amounts being invested still seem disarmingly modest. Like other disruptive technologies, the seeds of this revolution can be seen in start-ups that operate on a shoestring but have grandiose aims.
They include companies such as Dispatch, a Silicon Valley company that is testing an autonomous delivery vehicle—a smart-box on wheels—on two college campuses in the U.S. The start-up has raised only $2 million but is riding the wave in collapsing costs of sensors and advances in artificial intelligence that are making autonomous machines a reality.
“There is an exponential pace of improvement in hardware and machine learning algorithms,” says co-founder Uriah Baalke. “The computational power required has gone down a lot.” The result is a new class of machines that can operate by themselves in human space, the advance guard of a new robot industry.
Until now, most robots have taken the form of expensive, high-precision industrial machines. Usually found operating in protective cages on automobile assembly lines, they have carried out preprogrammed tasks, with no need or scope to adapt to changing conditions.
The cheaper, flexible machines that are emerging are designed to be more adaptive. From driverless cars and drones to the “cobots” that work alongside humans in industrial settings, they try to sense and adapt to their surroundings. Like Tug (a robot that moves supplies around hospitals), Savioke (which handles deliveries to hotel rooms) and Locus Robotics (which operates in warehouses), they are moving into the service industries. In industrial settings—still the main venue for robot investment—they are moving out of the cages and into a far wider range of roles.
Like the arrival of PCs, the new era promises to take the technology into many more areas of working life. “The traditional industrial robots are mainframes—what we’re doing are PCs,” says Scott Eckert, chief executive of Rethink Robotics, a U.S. company whose robots help with packing or tend machines. Rethink says that the all-in cost of its Sawyer robotic arm amounts to about $1 an hour, a price at which many of the jobs that have been beyond the reach of automation could be affected.
The technology advances behind this wave of innovation have come together remarkably quickly. Funding over the past five years by Darpa, the research arm of the U.S. Defense Department, has brought breakthroughs in mechanical areas such as robotic limbs, says SRI International’s Kothari.
But the biggest advances have come in software. Improvements in computer vision, for instance, have made possible many companies like Dispatch, whose machines rely on being able to “see” the world around them, says Chris Dixon, a partner at venture capital firm Andreessen Horowitz.
Machine learning algorithms, which are designed to adapt through an endless process of trial and error, play the biggest part in teaching robots how to navigate a world beyond the normal rules-based systems that computers are designed to handle.
“You won’t have to programmatically tell it what to do; it will figure it out,” says Vinod Khosla, a venture capitalist who has backed robot companies in markets including agriculture and health care. “Today, it’s really dumb intelligence—but that will change quickly.”
When it comes to designing the machines for this emerging industry, most robot entrepreneurs and investors are following a similar formula.
One element is to build low-cost machines that tackle specific tasks, rather than attempt to create general-purpose machines—let alone fully humanoid robots—that try to take on too much.
The goal is to build “single-purpose robots that do one thing very well”, says Dmitry Grishin, a Russian who recently raised a $100 million fund to invest in robots and other hardware. If they succeed, these machines quickly lose their status as “robots” and become more part of the fabric of everyday life, he says—like automated vacuum cleaners or cash machines.
Another design feature of many of the early robots is to operate alongside people, making humans more productive rather than replacing them altogether. Many of these robots, for instance, hand over decision-making to a human operator when they encounter situations they cannot understand or navigate.
“The truth is, anyone who works in robotics knows the limitations of what they’re working with, and they’re pretty extensive,” says Kothari. Robot companies also want to keep “the human in the loop” because they believe it will make their machines more socially acceptable and less threatening, he says. Most people operating in the robot industry say humans will have an important role to play in directing the machines for decades to come.
That does not change the long-term threat to jobs, however. “There isn’t a single mechanical or physical thing a human will be able to do better than a robot,” says Tesla and SpaceX’s Jurvetson.
Another feature the robot makers are counting on is to be able to use the learning capabilities of their initial products to achieve rapid improvements and gain an advantage over rivals that are slower to get their machines into the market.
“Once you ship the device, you can apply more and more intelligence and machine learning,” says Grishin, the Russian robot investor. The trick, he says, will be to find a task that the relatively dumb machines are able to handle, then use knowledge gained in the field to rapidly add to their capabilities and usefulness. “First put them in consumers’ hands, then learn from their behavior.”
This is the secret weapon that all robot companies rely on. “Everything gets better over time,” says Jurvetson. “This is happening in almost every hardware product: they are becoming minimal vessels for software.”
This technological shift has set traditional robotics leaders in Japan and Germany against nascent industries in countries such as the U.S. and China.
“Right now, the U.S. is definitely the leader” when it comes to software, says Grishin. He adds, however, that the hardware manufacturing expertise of China makes that country a contender, particularly since robotics has become a national priority. As a result, the rise of a new robot industry is about to trigger a global race for leadership.

12
Public Health / New Human Brain Language Map
« on: April 20, 2017, 09:52:52 PM »
Researchers find that Wernicke’s area, thought to be the seat of language comprehension in the human brain for more than a century, is not.
The map of language centers in the human brain is being redrawn. Researchers at Northwestern University have determined that Wernicke’s area, a hotdog-shaped region in the temporal lobe of the left hemisphere, may not be the seat of language comprehension, as has been scientific dogma for the past 140 years. Instead, the team suggests in a study published today (June 25) in the neurology journal Brain, understanding the meaning of words happens in the left anterior temporal lobe, while sentence comprehension is handled by a complex network of brain areas.
“This provides an important change in our understanding of language comprehension in the brain,” Marek-Marsel Mesulam, lead study author and director of Northwestern’s Cognitive Neurology and Alzheimer’s Disease Center, said in a statement.
Neuroscientist Carl Wernicke discovered in 1874 that some stroke victims with damage to the left sides of their brains suffered language impairment, which came to be known as Wernicke aphasia. Because those patients could often speak clearly, though nonsensically, and had trouble understanding simple instructions, Wernicke and other researchers surmised that the patients’ strokes had damaged the language comprehension center of the brain.
Instead of working with stroke victims, Mesulam and his colleagues studied patients with a rare form of language-affecting dementia called primary progressive aphasia (PPA). Mesulam, who is a leading expert on PPA, realized that PPA patients with damage to Wernicke’s area did not exhibit the same trouble with word meaning as stroke victims with similar brain damage.
So he and his colleagues performed language tests and brain MRIs on 72 PPA patients with damage inside and outside of Wernicke’s area. They found that PPA patients with reduced cortical thickness in Wernicke’s area could still understand words, but had varying degrees of trouble comprehending sentences. None of them had the widespread problems with language comprehension noted in stroke victims.
PPA and stroke damage the brain differently; in PPA, cortical areas degenerate, but their underlying fiber pathways, necessary for communication between different language centers in the brain, remain intact. Stroke, however, damages large swathes of brain matter.
“In this case, we saw a different map of language by comparing two different models of disease, one based on strokes that destroy an entire region of brain, cortex as well as underlying pathways, and the other on a neurodegenerative disease that attacks mostly brain cells in cortex rather than the region as a whole,” Mesulam said in the press release.
This means that language comprehension is much more diffuse and complicated in the brain, and the process likely relies on many interconnected brain regions, rather than one constrained area. “There is no center but a network of interconnected areas, each with a slightly different specialization,” Mesulam told Motherboard.

13
Business & Entrepreneurship / Neural Basis of Risk Aversion
« on: April 20, 2017, 09:51:48 PM »
Researchers identify and manipulate a signal in the brains of rats that controls risky behavior.
Humans and many other animals are generally risk-averse, meaning they will usually choose stable, certain rewards over risky ones, even if the average payoff from both options is the same. Now, researchers at Stanford University have identified—and manipulated—a specific signal in the brains of rats that determines risky behavior just before the animals make a decision. The findings were published yesterday (March 23) in Nature.
“It turns out you can explain a large part of whether rats were risky or not by this particular signal at this particular time,” study coauthor Karl Deisseroth of Stanford told The New York Times. “We saw it happen, and then we were able to provide that signal, and then see that we could drive the behavior causally.”
Previous research had suggested that an area of the forebrain called the nucleus accumbens plays an important role in decision-making. Part of the reward system of the brain, this region contains neurons with receptors for dopamine, a chemical producing feelings of pleasure.
To investigate the role of these neurons in risky decision-making, the researchers implanted an optical fiber into the nucleus accumbens of 17 rats. They then provided the rats with two levers: one that always prompted the release of a medium-size sugar water reward, the other resulting in a usually much smaller—but sometimes much larger—reward.
As expected, the researchers found that most rats were risk-averse, preferring certain, medium-size rewards most of the time. They also observed elevated activity of dopamine receptor–containing neurons in these rats’ nucleus accumbens in the moments before the animals pulled a lever. Risk-prone rats, meanwhile, showed much lower activity in these cells.
Using optogenetics, the researchers were able to induce risk-averse behavior by stimulating dopamine receptor–containing neurons artificially, even in otherwise risk-prone rats. Administering a drug that stimulates dopamine receptors, however, had the opposite effect.
“We are now that much closer to solving that most fascinating of questions: How does the brain use patterns of neural activity to make decisions?” Catharine Winstanley of the University of British Columbia who was not involved in the work told The Atlantic. “Such information is revolutionary for neuroscience, but will also help us to understand what has gone wrong in disorders of maladaptive decision-making, such as gambling and substance-use disorder.”

14
Science and Information / Nanoscale Defenses
« on: April 20, 2017, 09:48:19 PM »
Coating hospital surfaces, surgical equipment, patient implants, and water-delivery systems with nanoscale patterns and particles could curb the rise of hospital-acquired infections.
Picture a hospital room: white walls, stainless steel IV poles and bedrails, scratchy bedsheets. For more than 100 years, this has been the standard hospital environment, and for most of that time, isolating patients in hygienic rooms, instead of en masse in group clinics or sanatoria, has helped curb the spread of infections that once killed nearly half of soldiers on the battlefield and more than a third of newborn infants. But with the recent rise in antibiotic-resistant pathogens, this standard is no longer sustainable. In 2011, the most recent year data are available from the US Centers for Disease Control and Prevention (CDC), some 720,000 patients acquired an infection while being treated in a health care facility; more than 75,000 of those people died.
“Imagine one full jumbo jet crashed each day, killing everyone on board,” says Michael Schmidt, vice chairman of microbiology and immunology at the Medical University of South Carolina (MUSC). “This is precisely the number of people that die each day in the U.S. from a hospital-associated infection.”
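Schmidt's comparison tracks with the CDC figures cited above; this quick check is illustrative only, not a calculation from the article:

```python
# Dividing the CDC's annual death toll by 365 gives the daily figure
# behind the jumbo-jet analogy.
deaths_per_year = 75_000              # HAI-linked deaths in 2011, per the CDC
deaths_per_day = deaths_per_year / 365
print(round(deaths_per_day))          # 205, on the order of a wide-body jet's capacity
```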
In the face of such hospital-acquired, or nosocomial, infections, and the impossibility of developing effective antibiotics quickly enough, researchers are looking to update that white-walled hospital room. The stainless steel IV poles and bedrails could, for example, be coated in nanoparticles of metallic copper, which has antimicrobial properties. Long, thin filaments of nontoxic, bactericidal zinc could provide a protective metallic coating to those scratchy bedsheets, as well as the curtains and paper towels. And the nurse call button and other surfaces could be etched with nanopillars that kill bacteria on contact. 
Such nanoscale technologies can decrease the ability of bacteria to adhere to and grow on surfaces by increasing the permeability of bacterial membranes, disrupting protein function, and interfering with cell-cell communication. Moreover, by preventing bacteria from attaching to and colonizing surfaces, these technologies also make it impossible for the microbes to reach a critical population size (the threshold number varies by orders of magnitude depending on the strain) that triggers the formation of a biofilm, which inhibits the entry of antibiotics and disinfectants.
Preliminary in vitro studies have demonstrated that a diverse array of nanotechnologies are effective against some of today’s most dangerous pathogens, such as methicillin-resistant Staphylococcus aureus (MRSA), carbapenem-resistant Enterobacteriaceae (CRE), and Klebsiella pneumoniae. The widespread implementation of these approaches could serve as a stopgap until new therapies are available, and provide additional protection against infection to keep hospitals safe. In addition, nanopatterns designed to be harmless to host cells could be applied to synthetic implants to ward off bacterial growth. As researchers continue to refine and test these approaches against a range of pathogens, the full utility of microbe-resistant materials will become clear.
Bacteria-resistant surfaces
A few years ago, Schmidt and his colleagues at MUSC tested the idea that an environment coated in copper could stem the spread of infection in a hospital setting. The goal was to harness the antimicrobial power of metallic copper ions, which interact with bacterial surface proteins, damage cell membranes, and are taken up, passively or actively, into the cytoplasm. Once inside the bacteria, copper ions form free radicals that damage intracellular proteins and lipids. (See illustration on opposite page.) Copper can sometimes even prevent microbes from ever colonizing a surface in the first place. Bacterial membranes and cell walls are studded with proteins that initiate adhesion to surfaces; positively charged copper ions interact with these negatively charged bacterial adhesion proteins to distort protein shape and function, and can outcompete other metals such as zinc that are essential for protein function.
Schmidt and his colleagues refitted several hospital intensive care unit (ICU) rooms using metallic copper alloy surfacing to cover bedrails, IV poles, nurse call buttons, and visitors’ chairs, then randomly assigned patients to either the copper-laden rooms or rooms disinfected using standard protocols. After one year of observation across three separate hospitals, the concentration of bacteria on the surface of the copper-covered objects was a fifth of that on objects in standard ICU rooms, and the rates of nosocomial infection among patients assigned to “copper rooms” were almost 60 percent lower than those in the control rooms.1
However, the MUSC team’s method required that the copper coatings be constructed as full pieces of hardware, which can be prohibitively expensive and time-consuming to install in most modern health care settings. As an alternative, researchers may simply be able to apply metallic nanoparticles to an existing surface to achieve similar antibacterial effects. In 2013, Northeastern University chemical engineer Thomas Webster, president of the US Society for Biomaterials, teamed up with one of his former graduate students to create a selenium nanoparticle spray that can be applied to any surface to cut down on microbial numbers.2 The spray dries within minutes, leaving a layer of antimicrobial nanoparticles behind, and was shown to be nonhazardous in small-animal toxicity studies. Testing the spray on common hospital items, including chairs, bedsheets, and even paper towels, the researchers found that it decreased the overall microbial burden.3 Crucially, the nanoparticles were stable and active until the surfaces were used or washed. “We have seen we can turn almost any material into one that reduces bacterial adhesion and growth, all by implementing nanoscale features,” says Webster.
Another option for preventing microbial growth is to design antibacterial nanoscale features that can be etched into a variety of synthetic materials. Nanoscale pits or troughs can trap bacteria and prevent cell-cell communication, for example. Comparing micron- to nanometer-size troughs, Joanna Verran’s group at Manchester Metropolitan University in the U.K. showed that 200-nm troughs decreased adhesion of three bacterial strains and one yeast strain. As the feature sizes increased (from 500 nm to 2 µm), microbes that preferentially produced biofilms adhered to surfaces more readily: MRSA started adhering at 500 nm, Pseudomonas aeruginosa at 1–3 µm, and Candida albicans at 2 µm.
Alternatively, nanospikes can kill bacteria by penetrating their cell membranes, controlling microbial growth. In 2013, for example, Albert Yee’s team at the University of California, Irvine, showed that nanopillars that mimic the texture of a cicada wing aid in killing gram-negative bacteria such as Escherichia coli and Klebsiella.5 And in research presented at this year’s American Chemical Society conference in San Diego, the team demonstrated that nanopillars of a slightly different shape, modeled after the topography of a dragonfly’s wing, are able to kill gram-positive bacteria such as MRSA.6
As an added bonus, such nanopatterned surfaces can often be equipped with functional components, such as biopolymers and antibiotic side chains, that can further decrease the microbial load on a given surface. In a proof-of-concept study, Virginia Davis’s lab at Auburn University in Alabama designed a sheet of antimicrobial single-walled carbon nanotubes (SWNTs). Normally used in electronics development, SWNTs are extremely stable at high temperatures, pressures, and shear stresses, and have a carbon backbone that allowed the team to attach the antibacterial protein lysozyme. In as little as 30 seconds, the material increased bacterial killing by 50 percent compared with nonlysozyme controls.7 Davis and her colleagues formulated sheets of SWNTs as thin as 1.6 nm, allowing the nanomaterial to interact more effectively with bacteria, which have component parts as small as 0.5 nm.
It is quickly becoming clear that using the innate or modified ability of metallic ions and nanopatterned surfaces to kill bacteria or prevent them from forming impenetrable biofilms can be effective and relatively easy. Crucially, pathogens should be unable to evolve resistance to nanosurface strategies of fighting infection: nanoparticle surface energy can always distort protein function on a purely chemical basis, and nanotopographical features will always trap or lyse bacteria. Researchers are now working to develop scalable approaches such as high-throughput etching to tailor nanomaterials to fit large-scale antimicrobial needs.
Nanopatterned devices
Other surfaces that are prone to bacterial growth are those of medical implants, such as hip and knee replacement joints or artificial heart valves. In fact, up to half of all nosocomial infections result from implanted devices, and microbial biofilm growth is a significant cause of implant removal. Moreover, bacteria do not need to be antibiotic resistant to cause an issue, as niches within implants can shelter biofilm-forming microbes from antibiotics and host immune systems. 
New nanotechnologies are poised to prevent such problems, however. The use of nanosilver as a coating on embedded medical devices has already demonstrated the ability to inhibit biofilm formation.8 And patterning the plastic or metal surfaces of these implants with nanoscale pillars or pits could similarly decrease the growth of bacterial cells.
Nanopatterned surfaces can also improve host tolerance of the implant, potentially reducing healing time and pain after surgery. Various patterns of blocks less than 10 nm across prevent communication between bacteria, for example, while allowing relatively large, more flexible mammalian cells (~10–120 μm) to adhere. Christopher Bettinger of Carnegie Mellon University and his colleagues showed that long troughs called nanogratings etched into a solid surface promote the elongation of mammalian endothelial cells and eventual blood vessel formation.9 Other research has shown that rough, 50-nm nanotroughs can increase bone formation by osteoblasts while decreasing microbial adhesion and biofilm formation. “Small, long nanofeatures do not allow the somewhat stiff bacteria to attach, yet they allow mammalian cells to function,” explains Webster. These nanopatterned surfaces would be safer in the long run, as mammalian cells take over and grow into vasculature, bone, cartilage, and other tissues.
Alternatively, researchers have used nanopillars like those designed by Yee’s team at UC Irvine to kill any bacteria that land on an implant surface, while leaving mammalian cells unaffected. Bacteria lack cholesterol and other large chemical groups that provide flexibility to mammalian cells, making bacteria 5 to 20 times stiffer. The thickness of the peptidoglycan cell wall surrounding bacteria can also limit fluidity. As a result, bacteria are punctured by the nanostructures, while mammalian cells are able to “melt” into the spaces between nanoscale patterns and grow across the surface.
If successfully developed as safe design changes to implanted devices, such nanopatterns may also help prevent microbial spread via surgical instruments. Troublingly, there were 157,000 surgical site infections in the U.S. in 2011 (21.8 percent of all nosocomial infections), and some of them resulted from use of improperly sterilized surgical equipment. In the last year alone, the US Food and Drug Administration (FDA) handed down warning letters to three makers of duodenoscopes for improper sterilization procedures and lack of infection reporting: several people died in numerous hospitals after physicians reused improperly sterilized scopes that passed Pseudomonas aeruginosa and CRE to at-risk patients. In October 2015, the FDA required the companies (Olympus, Pentax, and Fujifilm) to submit new protocols for sterilization procedures.
Within a couple of months, Fujifilm issued revised cleaning instructions for its duodenoscopes and received FDA approval to continue production. And in January 2016, the agency declared that Olympus had provided the necessary modifications to its duodenoscope to prevent leakage of patient fluids into a sealed area inside the device that was harboring hazardous bacteria. At time of writing, Pentax was still working with the FDA to mitigate the potential for cross-contamination due to its devices. None of these companies, however, suggested nanocoatings as a solution. If research continues to show the effectiveness of antimicrobial nanopatterns, it would behoove endoscope manufacturers to consider such technologies. As the steps required for health care personnel to thoroughly clean and sterilize endoscopes between patients are arduous and complicated, nanocoatings in intricate interior chambers of these devices could ensure better microbial control.
Looking to the future
New approaches to discovering antibiotics have received much attention recently, and rightfully so. These techniques are crucial as the numbers of untreatable nosocomial infections continue to rise. But nanotechnological advances to stem such infections are fast becoming a viable strategy to supplement such drug-based approaches. Unfortunately, searches for “nano” and “surface” in clinicaltrials.gov turn up only seven ongoing trials, none of which are related to antimicrobial nanosurfaces. It is now critical to promote the advancement of nanotechnologies into the clinical setting.

15
Humanities & Social Science / Mimicry Muses
« on: April 20, 2017, 09:46:54 PM »
The animal world is full of clever solutions to bioengineering challenges.
It’s August and deep into summer. Along the Atlantic coast, reports of shark attacks and stinging jellyfish invasions have beachgoers wary. The wildly popular but infamous Discovery Channel series Shark Week just concluded, stoking its usual quota of irrational fear, even though this year’s programming did include some more nods to actual science. Sharks aren’t just fodder for sensationalist filmmakers: how they and other marine creatures live in their watery world has fed the imaginations of biomedical engineers looking to design better medical techniques and products by mimicking nature.
In this issue’s cover story (“Inspired by Nature”), Daniel Cossins describes how shark-skin denticles inspired the construction of antibacterial surfaces; how jellyfish tentacles influenced the design of a technique for snagging rare cancer cells circulating in the bloodstream; how mussel proteins that harden underwater to attach the mollusks to rocks can serve as an effective surgical glue; and other amazing examples of biomimicry.
Of course, nature plays muse to all scientists and still harbors many a secret in need of decoding. In the field of developmental biology, the unique and largely unexplored functional properties of an unusual (and transient) organ are the subject of “The Prescient Placenta,” by Christopher Coe, who details some of the maternal-fetal crosstalk necessary for mother and child to enjoy a successful pregnancy outcome.
Another developmental biology subject has recently become more mysterious—the embryonic origin of the lymphatic system. For a century, blood vessels were deemed the source of this unique drainage system that plays a critical role in immunological defense. Four recent papers that question this dogma are the subject of a special literature review by TS intern Amanda B. Keener.
By humanizing mouse tissues and immune systems, scientists are now improving the study of many diseases and the efficacy of drug testing, which will hopefully help investigators avoid many late-trial deleterious outcomes. (See “The Human Touch.”)
Epigenetics is the subject of two articles: a profile of Wolf Reik, a founding father of the field and witness to its progress from tedious biochemical measurement of genome methylation patterns to rapid whole-genome methylation sequencing in single cells (“Leaving an Imprint”); and a Lab Tools detailing new computational techniques for decoding the consequent flood of epigenetic data (“Messages in the Noise”). This month’s other Lab Tools, “Get With the Program,” offers tips on how to dip your toes into computer coding to help customize analyses of these and other enormous data sets.
And finally, a real heads-up article. “Drugging the Environment” traces the journey of pharmaceuticals after manufacture, and the trail is a dispiriting one. Runoff from manufacturing facilities; drugs excreted unmetabolized from the human body and flushed down the toilet, or thrown unused into the trash; and pharmaceuticals pooped out by livestock—all make their way into the ecosystem, where researchers are just beginning to document their effects on wildlife.
So if you’ve been spooked into staying above the high-water mark at the beach, there are plenty of true-life (and sometimes scary) stories in this issue to keep you occupied.
