Messages - Mohammad Mahedi Hasan

166
Suicide is not an easy subject to broach. In fact, we're struggling to introduce it right now. But this article isn't meant to be dark or somber; it's actually the opposite. A 2017 study may offer some hope for individuals teetering on the decision to end their lives. Help may be on the way in the form of a mental health practitioner, an fMRI, and six simple words.

Help Me Help You

It's standard practice for a doctor to ask a patient if they've had or are having thoughts of suicide, but we don't have a great way to intervene unless the patient outright answers "yes." According to a 2003 study, 80 percent of patients who committed suicide denied having suicidal thoughts in their last contact with a mental health professional. The first step toward helping these individuals is to improve our ability to assess risk.

A 2017 study published in Nature Human Behaviour offers what could be a life-saving solution to the verbal denial of suicidal ideation in patients. Technology to the rescue! The researchers put volunteers in an fMRI scanner and asked them to consider each of 30 positive, negative, and neutral words, one after the other, while their brain activity was analyzed. With this method, the researchers were able to identify suicidal individuals with 91 percent accuracy by using their brains' responses to six words: death, cruelty, trouble, carefree, good, and praise.

In the study, there were 17 control subjects and 17 participants who admitted to having suicidal thoughts. For the volunteers who had thought about suicide, words like death and cruelty activated the left superior medial frontal area and the medial frontal/anterior cingulate: areas associated with self-referential thought. Using a machine-learning algorithm, the researchers correctly identified 15 of the 17 brains of those with suicidal ideation and 16 of the 17 controls.
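
For readers who want to see where that 91 percent comes from, here is the arithmetic behind the reported counts as a short Python sketch. The counts are the ones given in the article; the sensitivity/specificity breakdown is just one common way to summarize them, not something the study reports in these terms.

    # Classification arithmetic for the 34 participants described above.
    suicidal_total, suicidal_correct = 17, 15   # ideators correctly flagged
    control_total, control_correct = 17, 16     # controls correctly cleared

    accuracy = (suicidal_correct + control_correct) / (suicidal_total + control_total)
    sensitivity = suicidal_correct / suicidal_total   # true-positive rate
    specificity = control_correct / control_total     # true-negative rate

    print(f"accuracy:    {accuracy:.0%}")     # ~91%
    print(f"sensitivity: {sensitivity:.0%}")  # ~88%
    print(f"specificity: {specificity:.0%}")  # ~94%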


Get Out of My Head!

This research offers promise and hope, but it's not perfect. Using huge, heavy, expensive equipment like fMRIs isn't exactly discreet or practical, for one. "It would be nice to see if we could possibly do this using EEG, if we could assess the thought alterations with EEG," said Dr. Marcel Just, the lead author of the study. "It would be enormously cheaper. More widely used."

The mind-reading of the fMRI is far from perfect, too. "If somebody didn't want others to know what they are thinking, they can certainly block that method. They can not cooperate," said Just. "I don't think we have a way to get at people's thoughts against their will." fMRI itself even has its critics. That being said, understanding that there are differences in the brains of suicidal individuals and controls is a big step in the direction of saving more lives.

Source: Web


167
Here's a conundrum: evolution is all about helping organisms produce more offspring. If a trait makes an animal more likely to die, that trait eventually disappears in favor of traits that help them live. But even if you never suffer from a single disease or injury, you're still going to die of old age. So why hasn't evolution put a stop to aging? According to researchers at the University of Arizona, it's because it's mathematically impossible.

Hurry Up And Wait

The current explanation for why evolution hasn't removed aging from the equation is that aging only becomes a problem after you're all done having children. When you can't produce offspring, evolution is done with you. (Why you stop reproducing at all is another question entirely). But evolutionary biology researchers Paul Nelson and Joanna Masel at the University of Arizona say it's more fundamental than that: aging is the price of admission into the multicellular organisms club.

The problem comes down to the two things that happen to aging cells: they either slow down or speed up. You can see cells slow down as you go gray, when your hair cells stop producing pigment; and as you get wrinkles, when your skin stops producing the collagen and elastin that keep it smooth and hydrated. That's the kind of aging we're all accustomed to.

When cells speed up their growth rate, that's when people get worried. That can cause cancer cells to form. But despite how scary the C-word can be, cancer cells are just as much a part of aging as gray hair or wrinkles. It's just when they cause symptoms that they become a problem. For example, half of men over the age of 60 have prostate cancer, but it usually doesn't develop into anything harmful and most end up dying of some other cause.

Between A Clock And A Hard Place

Nelson and Masel modeled the evolution of the traits that make cells slow down and speed up, and what they found was pretty definitive. If you find a way to keep cells from slowing down — hooray, no more gray hair or wrinkles! — fast-growing cancer cells will take over and kill you. If you find a way to keep cells from speeding up — hooray, we found the cure for cancer! — sluggish, poorly functioning cells will accumulate and kill you.

"Aging is mathematically inevitable — like, seriously inevitable," Masel said in a press release. "There's logically, theoretically, mathematically no way out." Being multicellular means your cells will compete with each other. Throw off that balance, and you make things worse. We can lengthen human life, but we can't stop death, and that makes it even more important to embrace the time you've got.
Source: Web


كُلُّ نَفْسٍ ذَآئِقَةُ الْمَوْتِ وَإِنَّمَا تُوَفَّوْنَ أُجُورَكُمْ يَوْمَ الْقِيَامَةِ فَمَن زُحْزِحَ عَنِ النَّارِ وَأُدْخِلَ الْجَنَّةَ فَقَدْ فَازَ وَما الْحَيَاةُ الدُّنْيَا إِلاَّ مَتَاعُ الْغُرُورِ
Al-Imran (The Family of Imran) - 3:185
("Every soul shall have a taste of death")

168


When it comes to solving problems, even the most advanced quantum computers still use the same basic approach as your old desktop: run as many calculations as it takes to reach the solution. But what if, instead of working through the calculations, a computer could just "know" the solution outright? That far-fetched idea is the basis for a weird new breakthrough in computer science. How could such a thing be possible? With particles of "magic dust" that researchers call polaritons.

Just Crazy Enough To Work

The "optimal" solution to a mathematical problem — from modeling the way proteins fold to figuring out how the stock market behaves — is the simplest one possible. A computer's task is to figure out a way to solve a problem with the absolute minimum number of steps. The way computers do this now is via the "brute-force" method, where they calculate and calculate and calculate until they finally happen on the optimal solution. To be fair, that can happen with wicked speed — at the time of this writing, the world's fastest supercomputer can carry out 93 quadrillion calculations per second — but it's not the most efficient approach.

A University of Cambridge press release compares that search for an optimal solution to a hiker trying to find the lowest point on a mountain range. "A hiker may go downhill and think that they have reached the lowest point of the entire landscape, but there may be a deeper drop just behind the next mountain. Such a search may seem daunting in natural terrain, but imagine its complexity in high-dimensional space." To overcome that challenge, Cambridge professor Natalia Berloff and her colleagues came at it from a completely new angle: instead of using a hiker, what if you used a magical dust that settled into each valley, but only glowed at the deepest level?

"A few years ago our purely theoretical proposal on how to do this was rejected by three scientific journals," said Berloff. "One referee said, 'Who would be crazy enough to try to implement this?!' So we had to do it ourselves, and now we've proved our proposal with experimental data."

Bibbidi Bobbidi Boo

This magical dust is made up of real quasiparticles called polaritons, which are created when a laser hits stacked layers of specific atoms to create a weird combination of matter and light. These super-lightweight particles easily crowd together and sync up to form a state of matter known as a Bose-Einstein condensate. In essence, that turns these ultra-tiny, hard-to-measure particles into one object that lights up in ways you can detect.

Once they had the magic dust, they just needed a mountain to try it out on. That's where the metaphor turns weirdly literal. See, there are ways you can map pen-and-paper math formulas into quantum models that deal with actual particles in actual space. One of these is known as the XY model, a fundamental optimization problem that can be represented on a graph. In a paper published in 2017 in Nature Materials, Berloff and her colleagues demonstrated that they could make the condensing polaritons arrange themselves on the vertices of a graph in a configuration that corresponded to the "absolute minimum of the objective function" — i.e. the optimal solution.
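
To make the "XY model" a bit more concrete, here is a minimal Python sketch of the objective the polaritons are effectively minimizing: choose an angle for each vertex of a graph so that a sum of cosine terms over the couplings is as low as possible. The three-vertex graph and its couplings are invented for illustration; the experiment encodes its couplings in how the polariton condensates are created and arranged, which is not modeled here.

    # Toy XY optimization: pick angles theta_i that minimize
    # E = -sum over pairs of J_ij * cos(theta_i - theta_j).
    import numpy as np

    J = np.array([[ 0.0,  1.0, -1.0],    # made-up couplings for a 3-vertex graph
                  [ 1.0,  0.0,  1.0],
                  [-1.0,  1.0,  0.0]])

    def xy_energy(theta):
        diff = theta[:, None] - theta[None, :]
        return -0.5 * np.sum(J * np.cos(diff))   # 0.5 undoes double counting

    # A crude random search stands in for the condensate "settling" into the
    # lowest-energy configuration all at once.
    rng = np.random.default_rng(0)
    best = min(xy_energy(rng.uniform(0, 2 * np.pi, 3)) for _ in range(20000))
    print("approximate minimum energy:", round(best, 3))

The appeal of the hardware is that a digital computer has to grind through a search like this step by step, while the condensing polaritons land on the lowest-energy arrangement in one go.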

This is the first step in many to come. "We are just at the beginning of exploring the potential of polariton graphs for solving complex problems," said co-author Professor Pavlos Lagoudakis.
Source: Web

169
Public Health / You Remember in the Opposite Order as You See
« on: November 30, 2017, 01:23:53 PM »
The thing about Monday morning quarterbacks is that they can see the whole game — they can focus on all the little details in the context of the whole. But for the players in the game as it happens, only the details exist and the big picture comes into view later. And as it turns out, that big-to-small versus small-to-big approach is pretty much how memory and experience always go down.

Looking Back on the Big Picture

It's well-known that the brain processes sensory information starting with the details and working its way up to the big picture. So you can forgive Ning Qian, the principal investigator behind this new study, for assuming that we'd experience our memories in the same way. Instead, it turned out that memory went about it the exact opposite way. Here's how they figured that out.

Twelve subjects were shown a line at a 50-degree angle for a half-second, then asked to position two dots to indicate the angle of the line. They then repeated this task 50 more times. Then things got really interesting, because they were asked to repeat the same thing 50 more times, only with a 53-degree angle. In their third and final task, they were shown both lines at once, and had two pairs of dots to position. Whew — we're about to faint from all this excitement.

Okay, okay, so the test wasn't so exciting for the participants. But the experimenters were absolutely thrilled. There's not a lot of difference between 50 and 53 degrees, so you'd think that somebody trying to draw one and then the other would occasionally miscalculate and draw the smaller angle as larger. But in fact, the participants had a firm handle on the relationship between the two lines, suggesting that for memory, the big picture is the primary experience.

Remembering Patterns

Here's another way of stating what was happening in the participants' brains. Essentially, they were first encoding the angle of one line, then encoding the angle of the other, and finally putting that information into a big picture understanding of how the lines related to each other. But when they were decoding — that is, remembering — they started from the relationship between the two lines and estimated the angles based on that.

That means that if you've already identified a larger pattern in a sequence of events that you remember, you are more likely to remember the specific details that support that pattern (and, incidentally, forget the ones that don't). That could reveal a lot about the ways that people make decisions based on preconceived notions, and that's important since your biases could cost you big time.
Source: Web

170

It's no secret robots are a growing threat to the human workforce. The World Bank estimates 57 percent of all jobs could be automated within the next 20 years. So how can humans stay one step ahead of the computers? By emphasizing something AI can never replicate: good old-fashioned human curiosity.

According to new research by SurveyMonkey, most employers don't put a high value on curiosity now, but those that do stand a greater chance of surviving in the future.

The Great Divide

SurveyMonkey's results show that many company leaders believe curiosity is key to their own success. Nearly half (49 percent) of American business leaders say curiosity is directly tied to the ability to make more money. But people lower on the ladder don't necessarily feel the same way - only 22 percent of people believe in the financial rewards of a curious mind.

What's interesting is that for all the talk of boundary-pushing among the higher-ups, few bosses are actually inspired by one another. Just 17 percent of business leaders are inspired by their peers, while 31 percent of more junior workers are inspired by people at their level.

So what's going on here? How can junior workers see the possible financial benefits of curiosity and how can managers find the inspiration that their subordinates have?

It all depends on how your workplace treats questions.

Riddle Me This

SurveyMonkey CEO Zander Lurie says company culture determines the openness to new ideas. "Speaking up in a meeting with a question that challenges authority or the status quo might be intimidating if curiosity isn't encouraged," Lurie said. "If your organization skews towards a Culture of Genius, then you're not going to generate the collaborative and curious questions from more junior teammates. When some minds are recognized or rewarded as inherently more brilliant than others, the 'have-nots' will be reluctant to share opinions and ask questions as a result. To foster a Culture of Curiosity where these fears don't exist, leadership needs to make asking questions part of an organization's culture. Celebrate the curious insights the questions beget."

As for those robots coming to sweep humanity aside, Lurie says leaning on our natural curiosity will give us the edge in the marketplace. "I could imagine that in the near future it becomes a standard practice for companies to use some type of a Curiosity Quotient score and make it part of hiring requirements, annual performance reviews, promotions, and financial rewards, all of which can help businesses stay more competitive in the market and gain unique advantages," Lurie said. "We aren't going to beat the robots at work ethic or attention to detail — so we better stay curious!"


Make Your Office Extraordinary

So how can your company make sure everyone is empowered to add their two cents? SurveyMonkey offers these tips:

    Make questions and curiosity central to your daily work and the company culture. Create an environment of transparency where people can get genuine answers and all kinds of questions are valued — this is especially important for the next generation of leaders.
    Establish a safe space where people can ask with no fear.
    Hire a diverse team where different points of view and questions can inspire more learning. It should be blindingly obvious by now that your business is in trouble if you don't have diversity on your senior leadership team and board of directors.
    Practice the art of questioning. It's not only one of the best ways to stay informed, but asking 'why' helps you identify and understand the motivations driving employees, customers, and partners. There is so much information available to businesses these days, but we don't have the information to get to the 'why' — this only comes from staying curious and asking questions.
    And reward these great questions! It's equally important to reward questions that drove innovation both through peer recognition programs and the leadership team's announcements, and to highlight when curiosity led to failure. "Hey, we thought we had a good idea, we tested our hypothesis, it failed, we learned!" Celebrating when you swung and missed gives teams the confidence to keep trying.

Source: Web


171
Public Health / Almost Every Mammal Gets About 1 Billion Heartbeats
« on: November 12, 2017, 02:12:30 PM »


It's strange to think that you have a limited number of heartbeats in your life, but at least you can take some solace in the fact that no one can ever say how many exactly you'll get, right? Wrong. Science knows. And it's about the same for almost every mammal.

Same Song, Different Tempos

Rabbits live about three years. Elephants can get up into their 80s. But both of them get about one billion heartbeats in their lifetime. It's just that elephant hearts beat a lot slower. As it turns out, that number stays (roughly) the same across other species of mammals. You might also have noticed that elephants are slightly larger than rabbits are, and there seems to be a similar correlation between size and lifespan. Is this indicative of a deeper truth of biology? Or even of the universe? It just might be — but more on that in a moment.

Before we get into the really head-trippy stuff, let's talk about the exceptions to the rule. Thanks to modern medicine, food preservation technology, and our habit of purifying our water sources, human beings have been able to extend our hearts' lives far past the one billion limit. We get a little bit more than two billion heartbeats in our lives, and who knows? In the future, we might be able to push it up to three.

Living at Large Scales

In 1999, physicist Geoffrey West and biologists Jim Brown and Brian Enquist found unexpected, interdisciplinary common ground when they set out to find how exactly animals' energy use and needs scale as they get larger or smaller. There'd been a fair amount of research into this already — in the 1930s, biologist Max Kleiber penned what's known today as "Kleiber's Law": metabolic rates scale to the three-quarter power instead of increasing at a one-to-one ratio. This is a bit complicated, but stay with us. Basically, it means that a cat, which is 100 times larger than a mouse, does not use 100 times the energy that a mouse does. Instead, it requires 100^(3/4) times the energy of a mouse — and that's only about 31.6 times the mouse's requirements.

But what this trio of scientists wanted to discover was far more complex than mere metabolism rates. They wanted to find out how characteristics such as lifespan and pulse rate scaled as well. Those features didn't have the exact same exponent as metabolism, but they clearly correlated with each other. Lifespan tended to scale to the power of one-quarter, and heart rate to the power of negative one-quarter. In other words, there's a mathematical equation that lets you predict roughly how long an animal will live and how fast its heart will beat, based on its size.

Source: Web
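
As a back-of-the-envelope footnote, here is a short Python sketch of how those two quarter-power exponents combine into a roughly fixed heartbeat budget. The constants for lifespan and heart rate are assumptions picked only to land in a realistic range; the exponents are the part taken from the research.

    # Lifespan ~ mass^(1/4) and heart rate ~ mass^(-1/4), so their product
    # (total heartbeats) barely depends on body mass at all.
    def lifespan_years(mass_kg, k_life=8.0):      # k_life is an assumed constant
        return k_life * mass_kg ** 0.25

    def heart_rate_bpm(mass_kg, k_rate=200.0):    # k_rate is an assumed constant
        return k_rate * mass_kg ** -0.25

    minutes_per_year = 365.25 * 24 * 60
    for name, mass in [("rabbit", 2.0), ("human", 70.0), ("elephant", 4000.0)]:
        beats = lifespan_years(mass) * minutes_per_year * heart_rate_bpm(mass)
        print(f"{name:8s} ~{beats:.1e} heartbeats")   # ~8e8 every time; the mass terms cancel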


172
Stephen Hawking’s 1966 doctoral thesis has broken the internet after becoming available to the general public for the first time.

Demand for the thesis, entitled Properties of Expanding Universes, was so great on Monday that it caused Cambridge University’s repository site to go down. The site was still inaccessible at 7.30pm on Monday.

The “historic and compelling” thesis had swiftly become the most-requested item in Cambridge’s open access repository, Apollo.

The university made the work public at midnight on Sunday to mark Open Access Week after hundreds of readers sent in requests to download Hawking’s thesis in full.

A University of Cambridge spokesperson said: “We have had a huge response to Prof Hawking’s decision to make his PhD thesis publicly available to download, with almost 60,000 downloads in less than 24 hours.

“As a result, visitors to our Open Access site may find that it is performing slower than usual and may at times be temporarily unavailable.”

The work considers implications and consequences of the expansion of the universe, and its conclusions include that galaxies cannot be formed through the growth of perturbations that were initially small.

However, until the university increases the website’s capacity to deal with requests, or demand falls, the paper is likely to remain unavailable to many of those trying to access it.

Properties of expanding universes / Stephen Hawking (PhD.5437)
https://cudl.lib.cam.ac.uk/view/MS-PHD-05437/1


Some implications and consequences of the expansion of the universe are examined. In Chapter 1 it is shown that this expansion creates grave difficulties for the Hoyle-Narlikar theory of gravitation. Chapter 2 deals with perturbations of an expanding homogeneous and isotropic universe. The conclusion is reached that galaxies cannot be formed as a result of the growth of perturbations that were initially small. The propagation and absorption of gravitational radiation is also investigated in this approximation. In Chapter 3 gravitational radiation in an expanding universe is examined by a method of asymptotic expansions. The 'peeling off' behaviour and the asymptotic group are derived. Chapter 4 deals with the occurrence of singularities in cosmological models. It is shown that a singularity is inevitable provided that certain very general conditions are satisfied.

Source: The Guardian

173
Can you catch Alzheimer’s disease? Fear has been growing that the illness might be capable of spreading via blood transfusions and surgical equipment, but it has been hard to find any evidence of this happening. Now a study has found that an Alzheimer’s protein can spread between mice that share a blood supply, causing brain degeneration.

We already know from prion diseases like Creutzfeldt-Jakob Disease (CJD) that misfolded proteins can spread brain diseases. Variant CJD can spread through meat products or blood transfusions infected with so-called prion proteins, for example.

Like CJD, Alzheimer’s also involves a misfolded protein called beta-amyloid. Plaques of this protein accumulate in the brains of people with the illness, although we still don’t know if the plaques cause the condition, or are merely a symptom.

There has been evidence that beta-amyloid may spread like prions. Around 50 years ago, many people with a growth disorder were treated with growth hormone taken from cadavers. Many of the recipients went on to develop CJD, as these cadavers turned out to be carrying prions. But decades later, it emerged in postmortems that some of these people had also developed Alzheimer’s plaques, despite being 51 or younger at the time.

Protein plaques

The team behind this work raised the possibility that some medical or surgical procedures may pose a risk.

Now a study has found that, when a healthy mouse is conjoined with a mouse with Alzheimer’s plaques, it will eventually start to develop plaques of beta-amyloid protein in its own brain. When the plaques form in healthy mice this way, their brain tissue then starts dying.

This suggests that Alzheimer’s can indeed spread via the beta-amyloid protein in blood. “The protein can get into the brain from a connected mouse and cause neurodegeneration,” says Weihong Song at the University of British Columbia in Vancouver, who led the work.

Song’s team conducted their study on mice with a gene that makes the human version of beta-amyloid, because mice don’t naturally develop Alzheimer’s. This gene enabled mice to develop brain plaques similar to those seen in people, and to show the same pattern of neurodegeneration.

Induced illness

The team then surgically attached mice with this Alzheimer’s-like condition to healthy mice without the beta-amyloid gene, in a way that made them share a blood system.

At first, the healthy mice started to accumulate beta-amyloid  in their brains. Within four months, the mice were also showing altered patterns of activity in brain regions key for learning and memory. It is the first time that beta-amyloid has been found to enter the blood and brain of another mouse and cause signs of Alzheimer’s disease, says Song.

“They somewhat convincingly show that it is possible to induce [the plaques] in mice just by connecting the circulation,” says Gustaf Edgren at the Karolinska Institute in Stockholm, Sweden. “It strengthens the case that amyloid beta is infectious somehow – it may actually be a prion or act like a prion.”

These findings contradict a study earlier this year by Edgren and his colleagues, which tracked 2.1 million recipients of blood transfusions across Sweden and Denmark. They found that people who received blood from people with Alzheimer’s didn’t seem to be at any greater risk of developing the disease.

Infectious protein

Edgren says that although his own study was very large, there’s still a chance it did not run long enough to catch evidence that Alzheimer’s proteins might be transmissible. “We only have follow-up for 25 years,” he says. “It could take a long time [for the disease to develop], or there could not be enough data. A lot of researchers fear that it’s an infectious protein.”

Song’s team say it is too soon to draw conclusions from their findings. Stitching mice together is not a situation that applies to people, says Edgren.

Mathias Jucker at the German Center for Neurodegenerative Diseases in Tübingen doesn’t think the study shows that Alzheimer’s is a transmissible disease. And the team have not yet looked at the behaviour of the mice to see if they show signs of the cognitive decline characteristic of Alzheimer’s.

In the meantime, Song thinks researchers and doctors should pay more attention to beta-amyloid in the blood, which could potentially be used to diagnose the disease. One of the reasons it has been difficult to treat Alzheimer’s is the difficulty of designing drugs that can cross the brain’s protective barrier. It may be easier to target the protein in the bloodstream, which could have knock-on effects for the brain, says Song.

Source: Web

174
Public Health / When Sperm Meets Egg, Sparks Literally Fly
« on: October 27, 2017, 03:37:35 PM »
Light is sometimes used as a metaphor for life — you might "put someone's lights out" or tell them they're "the light of your life." But life and light aren't just linked poetically. When sperm meets an egg, there is a real flash of light. So in a way, life and light are one and the same.

Baby, You're A Firework

Researchers at Northwestern University made this startling discovery in 2011, when they saw what they call "zinc sparks" at the point of conception in mice. In 2014, they figured out a way to capture images of the event taking place. And in 2017, they achieved the holy grail of biological research: the team observed the same thing occurring in human egg cells.

Egg cells rely on zinc for their most vital functions, from maturing into a healthy egg to developing into an embryo. The 2014 research showed that each human egg cell comprises around 8,000 zinc compartments, each of which contains a million zinc atoms. When sperm meets an egg — although, as the human experiments demonstrated, all you really need is a sperm enzyme — the egg releases a flurry of zinc atoms all at once. Under the researchers' fluorescent sensor, it looks like a microscopic fireworks show, studded with explosion after tiny explosion. The spectacle lasts for up to two hours after conception.

Good Egg, Bad Egg

This isn't just cool; it's incredibly useful. The size of these explosions tells scientists a lot about the viability of the embryo the egg will produce. For people going through the costly and emotional ordeal of in vitro fertilization (IVF), that's big news, since at the moment less than half of IVF treatments result in a live birth.

"There are no tools currently available that tell us if it's a good quality egg," said co-author Dr. Eve Feinberg. "Often we don't know whether the egg or embryo is truly viable until we see if a pregnancy ensues. That's the reason this is so transformative. If we have the ability up front to see what is a good egg and what's not, it will help us know which embryo to transfer, avoid a lot of heartache and achieve pregnancy much more quickly." If that doesn't light up your life, we don't know what will.

175
The idea that a machine might be able to read and translate your thoughts may sound like science fiction. Buckle up, buddy. You're in the future now. Researchers have found a way to read a bird's brain and predict what song it's going to sing next.

University of California, San Diego scientists announced they were able to "decode realistic synthetic birdsong directly from neural activity." The team claims this is the first prototype of a decoder for complex, natural communication signals from neural activity. And it's the first step towards some pretty ambitious goals.

The Bird's Words

The experiment used machine learning to "decode" the pattern of neural firing in the zebra finch, a songbird that learns its language from older birds – much like how humans learn language from adults. The researchers analyzed both the pattern of neural firing and the actual song that resulted, including its stops and starts and changing frequencies. The idea was to train their software to match the brain activity to the sound produced, in what they termed "neural-to-song spectrum mappings."
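
To give a sense of what a "neural-to-song spectrum mapping" might look like in code, here is a deliberately simplified Python sketch: fit a linear (ridge-regression) decoder from firing rates to spectrogram frames, then use it to predict a synthetic spectrogram from new activity. The data are random stand-ins and the linear model is our own choice for illustration; it is not the decoder architecture the UC San Diego team actually used.

    # Toy decoder: predict each spectrogram frame from simultaneous firing rates.
    import numpy as np

    rng = np.random.default_rng(1)
    n_frames, n_neurons, n_freq_bins = 500, 40, 32

    # Fake training data: firing-rate patterns X and the song spectrogram S.
    X = rng.poisson(lam=3.0, size=(n_frames, n_neurons)).astype(float)
    true_map = rng.normal(size=(n_neurons, n_freq_bins))
    S = X @ true_map + rng.normal(scale=0.5, size=(n_frames, n_freq_bins))

    # Fit a ridge-regression mapping W so that S is approximately X @ W.
    lam = 1.0
    W = np.linalg.solve(X.T @ X + lam * np.eye(n_neurons), X.T @ S)

    # "Decode" a synthetic spectrogram from previously unseen neural activity.
    X_new = rng.poisson(lam=3.0, size=(10, n_neurons)).astype(float)
    print("predicted spectrogram frames:", (X_new @ W).shape)   # (10, 32)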

"We have demonstrated a [Brain Interface Machine] for a complex communication signal, using an animal model for human speech," the researchers wrote. But why birds? In 2014, the results of a massive, four-year genetic research study published in a package of articles in "Science" confirmed that bird songs are closely related to human speech. The research implications were that "you can study in song birds and test their function in a way you can't do in humans," Erich Jarvis – one of the lead researchers of the international effort – told Scientific American.

This could be the first step towards developing a brain-to-text interface, but what are the implications? First, there's economic incentive from Silicon Valley: both Elon Musk and Mark Zuckerberg have announced that they're working on brain-machine interfaces (BMIs) to allow people to do things like text or tweet directly from their brains, with Musk launching brain-interface company Neuralink and Zuckerberg launching research lab Building 8 in early 2016 to research such technologies. Second, as UC San Diego's report states, this development can assist in the advancement of biomedical speech-prosthetic devices for patients without a voice.

Finding A Voice

To better understand the medical application, it helps to first know how prosthetics work. Dr. Sliman Bensmaia, neuroscientist and principal investigator at the University of Chicago, recently joined us on the Curiosity Podcast to explain how they work and how he and other researchers are developing more advanced prosthetics with a realistic sense of touch.

Bensmaia works on two different types of prosthetics. "For amputees, where part of the arm is missing and part of the arm is still there, the nerve that used to innervate the hand is still there," he explained. "When you electrically stimulate that nerve and create patterns of neural activation in that nerve, you basically wake that nerve up. That evokes a sensation in the amputee of something touching his hand that is no longer there." The science behind using this type of prosthetic is different from the science behind the BMIs utilized in other types.

"The other type of prosthesis is directed at people who are tetraplegic, so they're paralyzed and insensate from the neck down. That means the nerve is still there, but it's no longer attached to the brain," Bensmaia continued. "So you can stimulate the nerve, but it doesn't have any consequence. For those patients, the only solution is to interface directly with the brain." And to do that, researchers have to learn how to decode the neural code – one experiment at a time.

"We present stimuli, we record the neural activity, and then we try to understand how information is encoded in those neural signals," Bensmaia told us. "If we do our job right, you can give me a pattern of neural activation and I can tell you a lot about the thing that was touched." This is what the UC San Diego team accomplished when they were able to "decode realistic synthetic birdsong directly from neural activity." Researcher Makoto Fukushima told MIT Technology Review that the richer range of birdsong is why the new results have "important implications for application in human speech."

I'm Thinking Of A Number...


Current brain-machine interfaces can track neural signals that reflect a person's imagined arm movements, allowing users to move a robot or a cursor on a screen. "BMIs hold promise to restore impaired motor function and, because they decode neural signals to infer behavior, can serve as powerful tools to understand the neural mechanisms of motor control," the full report explains.

"Yet complex behaviors, such as vocal communication, exceed state-of-the-art decoding technologies which are currently restricted to comparatively simple motor actions. Here we present a BMI for birdsong, that decodes a complex, learned vocal behavior directly from neural activity." So while the idea of a helmet or brain implant that can effortlessly pick up what you're trying to say remains pretty far from being realized, this research shows that it's not strictly impossible.

176
If all the world had were sticks and stones, the guy with the machine gun would reign supreme. That's essentially the situation with the arrival of quantum computers. They're so powerful that it takes them mere hours to solve problems that would take modern computers years to work through. That means that the moment the first quantum computer turns on, encrypted data across the internet is pretty much up for grabs. That is, unless we do something about it.

I Am The Gatekeeper. Are You The Keymaster?

If you want to send a secret message — whether it's military intelligence about enemy troops or your credit card number to buy a toaster online — you have to encrypt it. Encryption is a way of encoding a message so that only authorized parties can read it and nobody can eavesdrop on the transmission. To encrypt something, you need a cipher, which is an algorithm that converts the message into a scrambled mess of characters that can only be turned back into the original message using a special key.

The internet these days uses two types of encryption. Symmetric-key cryptography is the oldest; it uses a single key to both encrypt and decrypt the message. Say Beyoncé and Jay-Z wanted to exchange secret messages. With a symmetric-key system, Beyoncé and Jay-Z would meet beforehand and agree on a secret key they can use later to send messages back and forth without anyone else being able to read them.
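
As a toy illustration of the single-shared-key idea, here is a Python sketch that scrambles and unscrambles a message with one pre-agreed key. The XOR scheme and the key are made up for demonstration and are not a secure cipher; real symmetric systems use algorithms like AES.

    # Toy symmetric scheme: the same shared key both scrambles and unscrambles.
    key = b"pre-agreed-secret-key"            # exchanged at the secret meeting

    def xor_with_key(data: bytes) -> bytes:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    ciphertext = xor_with_key(b"meet me at the studio at 9")
    print(ciphertext)                 # the scrambled bytes an eavesdropper sees
    print(xor_with_key(ciphertext))   # applying the same key again restores it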

But meeting beforehand isn't always possible, and anyway, what if someone (ahem, Becky with the good hair) was eavesdropping on that secret meeting? That's why we have public-key cryptography. It's a type of asymmetric-key algorithm, since it uses more than one key, and it's the bread and butter of modern online commerce.

In this case, Beyoncé might tell Jay-Z publicly how to encode a message to her, but only she would know how to decode it. This works because some mathematical processes are easy to do, but hard to undo. For example, Jay-Z could multiply two large whole numbers and send the result to Beyoncé, but an eavesdropper (Becky!) would have a hard time figuring out the original numbers from that final result. In the real world, the difficult math problems that public-key systems rely on are called hidden subgroup problems.
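
Here is a tiny Python sketch of that "easy to do, hard to undo" asymmetry, staying with the article's multiplication example. The primes are arbitrary small ones chosen so the demo runs instantly; real public-key systems use numbers hundreds of digits long, and real schemes involve much more than bare multiplication.

    # Multiplying two primes is one instruction; recovering them by trial
    # division is the slow "undo" an eavesdropper faces, and it gets
    # astronomically slower as the numbers grow.
    p, q = 104729, 1299709            # arbitrary example primes
    n = p * q                         # the easy direction
    print("public product:", n)

    def factor(n):
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 1
        return n, 1

    print("recovered factors:", factor(n))   # ~100,000 trial divisions even here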

Quantum Is Coming

So here's the thing: experts predict that once quantum computers are up and running, they'll be able to solve hidden subgroup problems in no time. That's because while traditional computers manipulate every particle of information, or "bit", as either a 0 or a 1, quantum bits, or "qubits," can exist as 0, 1, and all points in between. That makes quantum computers millions of times more powerful than the computers that created those encryption algorithms.
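
As a very loose picture of the "0, 1, and all points in between" idea, here is a minimal Python sketch of a single qubit as a pair of amplitudes. This is the textbook toy model, not a claim about how any particular quantum machine works.

    import numpy as np

    # A qubit's state is two complex amplitudes; a measurement returns 0 or 1
    # with probabilities given by the squared magnitudes of those amplitudes.
    state = np.array([1, 1], dtype=complex) / np.sqrt(2)   # equal superposition
    p0, p1 = np.abs(state) ** 2
    print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")              # 0.50 and 0.50

    # Describing n qubits takes 2**n amplitudes, one intuition for why scaling
    # up quantum hardware opens up problems classical brute force can't touch.
    print("amplitudes needed for 50 qubits:", 2 ** 50)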

Nobody has created a quantum computer that can do anything of real importance yet, but it's reasonable to assume they'll be here sometime after 2025. And like any technology, we won't switch from traditional to quantum computing in one fell swoop — it'll be gradual, and that leaves traditional cryptography at risk. Even secure information from the past is at stake. "An attacker can record our secure communication today and break it with a quantum computer years later. All of today's secrets will be lost," warns Tanja Lange, professor of Cryptology at Eindhoven University of Technology.

But there's hope. Lange leads the research consortium on Post-Quantum Cryptography, or PQCrypto, which combines the intellectual prowess of 11 different universities and companies to come up with new ways of encrypting data, hopefully without the use of hidden subgroup problems. It only started in 2015, and Lange says it can take up to 20 years after development for a new cryptographic technique to reach the end user. But they may have no other choice. Quantum computing is coming, and we have to be ready when it does.


177
Early dawn falls on a city street that almost seems abandoned — until you notice a slight motion in the shadows. A lone figure shuffles across the pavement, and soon, he's joined by another, and then another. They mindlessly lurch forward in relentless pursuit of their target: you, the clerk at the coffee shop. This isn't a zombie story, but those unfortunate souls who have to pull themselves out of bed so early? They might still be headed for an early grave.

Why All-Nighters are Nail-Biters


So everybody knows how much it sucks when you have to pull an all-nighter. But just take a nap during the day, and then get a couple days' worth of good sleep, and you'll be back on track, right? Er, no. According to ER doctor Andrew Herring, "A single night shift has cognitive effects going out for a week." And it's not just doctors (and their patients) who are suffering. Industrial accidents occur disproportionately before dawn as well. But perhaps most surprisingly, it's not just about cognitive ability. Workers assigned to the night shift are at a higher risk for obesity, diabetes, and even cancer.

One study on a group of particularly beleaguered laboratory mice highlighted exactly how crucial a reliable sleep schedule is to our health. Over the course of eight weeks, the lights in the room were dimmed and raised in sync with dawn and dusk for six days a week — but on the seventh, they were turned on a full six hours early. The six days of normal light did nothing to offset the shock the mice suffered. The young mice demonstrated clear signs of mental instability, and as for the older mice? A full 53 percent of them simply died.

Getting Back on Track

The problem, in this case, is the lack of a reliable sleep schedule. Although night shifts are inarguably worse for your health than day shifts, the real danger is trying to switch back and forth. Of course, you don't have to have such a severe shock as having to work a completely different shift for your schedule to get thrown completely off-kilter.

Getting up early for work on some days (especially if you aren't in the habit of getting to bed earlier) can cause negative health effects as well. So can shaking up your eating schedule, which throws off your blood sugar production without changing how your body is accustomed to burning it. And then there's the pesky problem of light — besides watching TV late into the night, we all carry around devices that we use to shine a bright white beam directly into our eyes at all hours of the day.

So what is there to do about it? Well, if you've got a schedule-shakeup coming on your calendar, try easing yourself into it by gradually shifting your sleep habits to fit your needs ahead of time, and give yourself time to adjust after you make the switch as well (also, try to keep that new schedule as long as possible, instead of switching back and forth). And shift your meals gradually as well, in order to keep all of your internal clocks in sync with each other as well as with the world around them.

When it comes to light, the only thing to do about it is to be aware of what you're exposing yourself to. Keep lights in your house dim after dark and try to limit your phone usage a couple of hours before bed. And here's one more tip from Susan Golden, director of the UCSD Center for Circadian Biology: when she and her husband watch TV late at night, they do so through orange-colored sunglasses. That keeps their exposure to the noon-esque blue light from the TV to a minimum.
