Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - ZER

Pages: [1] 2
1
EEE / 7 Shocking 3D Printed Things
« on: April 20, 2017, 02:36:09 PM »

2
EEE / 3D Printed House Technology- I Saw Ep.1
« on: April 20, 2017, 02:34:50 PM »

4
EEE / 3D Metal Printing | 3D Printing Technologies
« on: April 20, 2017, 02:29:34 PM »

6
EEE / Metamaterial Mechanisms (UIST'16)
« on: April 20, 2017, 02:27:23 PM »

7
EEE / Invisibility Breakthrough for Japanese Researchers
« on: April 20, 2017, 02:26:57 PM »

8
EEE / Metamaterials: The Next Photonics Revolution
« on: April 20, 2017, 02:26:10 PM »

9
EEE / Researchers “iron out” graphene’s wrinkles
« on: April 20, 2017, 01:25:04 PM »
From an electron’s point of view, graphene must be a hair-raising thrill ride. For years, scientists have observed that electrons can blitz through graphene at velocities approaching the speed of light, far faster than they can travel through silicon and other semiconducting materials.
Graphene, therefore, has been touted as a promising successor to silicon, with the potential to enable faster, more efficient electronic and photonic devices.
But manufacturing pristine graphene — a single, perfectly flat, ultrathin sheet of carbon atoms, precisely aligned and linked together like chicken wire — is extremely difficult. Conventional fabrication processes often generate wrinkles, which can derail an electron’s bullet-train journey, significantly limiting graphene’s electrical performance.
Now engineers at MIT have found a way to make graphene with fewer wrinkles, and to iron out the wrinkles that do appear. After fabricating and then flattening out the graphene, the researchers tested its electrical conductivity. They found each wafer exhibited uniform performance, meaning that electrons flowed freely across each wafer, at similar speeds, even across previously wrinkled regions.
In a paper published today in the Proceedings of the National Academy of Sciences, the researchers report that their techniques successfully produce wafer-scale, “single-domain” graphene — single layers of graphene that are uniform in both atomic arrangement and electronic performance.
“For graphene to play as a main semiconductor material for industry, it has to be single-domain, so that if you make millions of devices on it, the performance of the devices is the same in any location,” says Jeehwan Kim, the Class of 1947 Career Development Assistant Professor in the departments of Mechanical Engineering and Materials Science and Engineering at MIT. “Now we can really produce single-domain graphene at wafer scale.”
Kim’s co-authors include Sanghoon Bae, Samuel Cruz, and Yunjo Kim from MIT, along with researchers from IBM, the University of California at Los Angeles, and Kyungpook National University in South Korea.
A patchwork of wrinkles
The most common way to make graphene involves chemical vapor deposition, or CVD, a process in which carbon atoms are deposited onto a crystalline substrate such as copper foil. Once the copper foil is evenly coated with a single layer of carbon atoms, scientists submerge the entire thing in acid to etch away the copper. What remains is a single sheet of graphene, which researchers then pull out from the acid.
The CVD process can produce relatively large, macroscopic wrinkles in graphene, due to the roughness of the underlying copper itself and the process of pulling the graphene out from the acid. The alignment of carbon atoms is not uniform across the graphene, creating a “polycrystalline” state in which graphene resembles an uneven, patchwork terrain, preventing electrons from flowing at uniform rates.
In 2013, while working at IBM, Kim and his colleagues developed a method to fabricate wafers of single-crystalline graphene, in which the orientation of carbon atoms is exactly the same throughout a wafer.
Rather than using CVD, his team produced single-crystalline graphene from a silicon carbide wafer with an atomically smooth surface, albeit with tiny, step-like wrinkles on the order of several nanometers. They then used a thin sheet of nickel to peel off the topmost graphene from the silicon carbide wafer, in a process called layer-resolved graphene transfer.
Ironing charges
In their new paper, Kim and his colleagues discovered that the layer-resolved graphene transfer irons out the steps and tiny wrinkles in silicon carbide-fabricated graphene. Before transferring the layer of graphene onto a silicon wafer, the team oxidized the silicon, creating a layer of silicon dioxide that naturally exhibits electrostatic charges. When the researchers then deposited the graphene, the silicon dioxide effectively pulled graphene’s carbon atoms down onto the wafer, flattening out its steps and wrinkles.
Kim says this ironing method would not work on CVD-fabricated graphene, as the wrinkles generated through CVD are much larger, on the order of several microns.
“The CVD process creates wrinkles that are too high to be ironed out,” Kim notes. “For silicon carbide graphene, the wrinkles are just a few nanometers high, short enough to be flattened out.”
To test whether the flattened, single-crystalline graphene wafers were single-domain, the researchers fabricated tiny transistors on multiple sites on each wafer, including across previously wrinkled regions.
“We measured electron mobility throughout the wafers, and their performance was comparable,” Kim says. “What’s more, this mobility in ironed graphene is two times faster. So now we really have single-domain graphene, and its electrical quality is much higher [than graphene-attached silicon carbide].”
Kim says that while there are still challenges to adapting graphene for use in electronics, the group’s results give researchers a blueprint for how to reliably manufacture pristine, single-domain, wrinkle-free graphene at wafer scale.
“If you want to make any electronic device using graphene, you need to work with single-domain graphene,” Kim says. “There’s still a long way to go to make an operational transistor out of graphene. But we can now show the community guidelines for how you can make single-crystalline, single-domain graphene.”
Full Story: http://news.mit.edu/2017/iron-out-graphene-wrinkles-conductive-wafers-0403

10
EEE / Neuroscientists identify brain circuit necessary for memory formation
When we visit a friend or go to the beach, our brain stores a short-term memory of the experience in a part of the brain called the hippocampus. Those memories are later “consolidated” — that is, transferred to another part of the brain for longer-term storage.
A new MIT study of the neural circuits that underlie this process reveals, for the first time, that memories are actually formed simultaneously in the hippocampus and the long-term storage location in the brain’s cortex. However, the long-term memories remain “silent” for about two weeks before reaching a mature state.
“This and other findings in this paper provide a comprehensive circuit mechanism for consolidation of memory,” says Susumu Tonegawa, the Picower Professor of Biology and Neuroscience, the director of the RIKEN-MIT Center for Neural Circuit Genetics at the Picower Institute for Learning and Memory, and the study’s senior author.
The findings, which appear in Science on April 6, may force some revision of the dominant models of how memory consolidation occurs, the researchers say.
The paper’s lead authors are research scientist Takashi Kitamura, postdoc Sachie Ogawa, and graduate student Dheeraj Roy. Other authors are postdocs Teruhiro Okuyama and Mark Morrissey, technical associate Lillian Smith, and former postdoc Roger Redondo.
Long-term storage
Beginning in the 1950s, studies of the famous amnesiac patient Henry Molaison, then known only as Patient H.M., revealed that the hippocampus is essential for forming new long-term memories. Molaison, whose hippocampus was damaged during an operation meant to help control his epileptic seizures, was no longer able to store new memories after the operation. However, he could still access some memories that had been formed before the surgery.
This suggested that long-term episodic memories (memories of specific events) are stored outside the hippocampus. Scientists believe these memories are stored in the neocortex, the part of the brain also responsible for cognitive functions such as attention and planning.
Neuroscientists have developed two major models to describe how memories are transferred from short- to long-term memory. The earliest, known as the standard model, proposes that short-term memories are initially formed and stored in the hippocampus only, before being gradually transferred to long-term storage in the neocortex and disappearing from the hippocampus.
A more recent model, the multiple trace model, suggests that traces of episodic memories remain in the hippocampus. These traces may store details of the memory, while the more general outlines are stored in the neocortex.
Until recently, there has been no good way to test these theories. Most previous studies of memory were based on analyzing how damage to certain brain areas affects memories. However, in 2012, Tonegawa’s lab developed a way to label cells called engram cells, which contain specific memories. This allows the researchers to trace the circuits involved in memory storage and retrieval. They can also artificially reactivate memories by using optogenetics, a technique that allows them to turn target cells on or off using light.
In the new Science study, the researchers used this approach to label memory cells in mice during a fear-conditioning event — that is, a mild electric shock delivered when the mouse is in a particular chamber. Then, they could use light to artificially reactivate these memory cells at different times and see if that reactivation provoked a behavioral response from the mice (freezing in place). The researchers could also determine which memory cells were active when the mice were placed in the chamber where the fear conditioning occurred, prompting them to naturally recall the memory.
The researchers labeled memory cells in three parts of the brain: the hippocampus, the prefrontal cortex, and the basolateral amygdala, which stores memories’ emotional associations.
Just one day after the fear-conditioning event, the researchers found that memories of the event were being stored in engram cells in both the hippocampus and the prefrontal cortex. However, the engram cells in the prefrontal cortex were “silent” — they could stimulate freezing behavior when artificially activated by light, but they did not fire during natural memory recall.
“Already the prefrontal cortex contained the specific memory information,” Kitamura says. “This is contrary to the standard theory of memory consolidation, which says that you gradually transfer the memories. The memory is already there.”
Over the next two weeks, the silent memory cells in the prefrontal cortex gradually matured, as reflected by changes in their anatomy and physiological activity, until the cells became necessary for the animals to naturally recall the event. By the end of the same period, the hippocampal engram cells became silent and were no longer needed for natural recall. However, traces of the memory remained: Reactivating those cells with light still prompted the animals to freeze.
In the basolateral amygdala, once memories were formed, the engram cells remained unchanged throughout the course of the experiment. Those cells, which are necessary to evoke the emotions linked with particular memories, communicate with engram cells in both the hippocampus and the prefrontal cortex.
Theory revision
The findings suggest that traditional theories of consolidation may not be accurate, because memories are formed rapidly and simultaneously in the prefrontal cortex and the hippocampus on the day of training.
“They’re formed in parallel but then they go different ways from there. The prefrontal cortex becomes stronger and the hippocampus becomes weaker,” Morrissey says.
“This paper shows clearly that from the get-go, engrams are formed in the prefrontal cortex,” says Paul Frankland, a principal investigator in the Neurobiology Laboratory at the Hospital for Sick Children in Toronto, who was not involved in the study. “It challenges the notion that there’s a movement of the memory trace from the hippocampus to the cortex, and makes the point that these circuits are engaged together at the same time. As the memories age, there’s a shift in the balance of which circuit is engaged as a memory is recalled.”
Further studies are needed to determine whether memories fade completely from hippocampal cells or if some traces remain. Right now, the researchers can only monitor engram cells for about two weeks, but they are working on adapting their technology to work for a longer period.
Kitamura says he believes that some trace of memory may stay in the hippocampus indefinitely, storing details that are retrieved only occasionally. “To discriminate two similar episodes, this silent engram may reactivate and people can retrieve the detailed episodic memory, even at very remote time points,” he says.
The researchers also plan to further investigate how the prefrontal cortex engram maturation process occurs. This study already showed that communication between the prefrontal cortex and the hippocampus is critical, because blocking the circuit connecting those two regions prevented the cortical memory cells from maturing properly.
The research was funded by the RIKEN Brain Science Institute, the Howard Hughes Medical Institute, and the JPB Foundation.
Full Story: http://news.mit.edu/2017/neuroscientists-identify-brain-circuit-necessary-memory-formation-0406

11
EEE / Nanoparticles open new window for biological imaging
« on: April 20, 2017, 12:02:36 PM »
For certain frequencies of short-wave infrared light, most biological tissues are nearly as transparent as glass. Now, researchers have made tiny particles that can be injected into the body, where they emit those penetrating frequencies. The advance may provide a new way of making detailed images of internal body structures such as fine networks of blood vessels.
The new findings, based on the use of light-emitting particles called quantum dots, are described in a paper in the journal Nature Biomedical Engineering, by MIT research scientist Oliver Bruns, recent graduate Thomas Bischof PhD ’15, professor of chemistry Moungi Bawendi, and 21 others.
Near-infrared imaging for research on biological tissues, with wavelengths between 700 and 900 nanometers (billionths of a meter), is widely used, but wavelengths of around 1,000 to 2,000 nanometers have the potential to provide even better results, because body tissues are more transparent to that light. “We knew that this imaging mode would be better” than existing methods, Bruns explains, “but we were lacking high-quality emitters” — that is, light-emitting materials that could produce these precise wavelengths.
Light-emitting particles have been a specialty of Bawendi, the Lester Wolf Professor of Chemistry, whose lab has over the years developed new ways of making quantum dots. These nanocrystals, made of semiconductor materials, emit light whose frequency can be precisely tuned by controlling the exact size and composition of the particles.
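To make concrete how size controls emission color, here is a minimal particle-in-a-sphere (Brus-style) estimate in Python, assuming illustrative band-gap and effective-mass values that are not taken from the paper: the confinement energy scales roughly as 1/R², so shrinking the nanocrystal shifts the emission to shorter wavelengths.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M0   = 9.1093837015e-31  # electron rest mass, kg
EV   = 1.602176634e-19   # 1 eV in joules
H    = 6.62607015e-34    # Planck constant, J*s
C    = 2.99792458e8      # speed of light, m/s

def emission_wavelength_nm(radius_nm, bulk_gap_ev, m_e_eff, m_h_eff):
    """Toy particle-in-a-sphere estimate: the confinement term grows as 1/R^2,
    so smaller dots emit higher-energy (shorter-wavelength) photons.
    Coulomb and surface corrections are ignored."""
    r = radius_nm * 1e-9
    confinement_j = (HBAR**2 * math.pi**2) / (2 * r**2) * (1 / (m_e_eff * M0) + 1 / (m_h_eff * M0))
    total_ev = bulk_gap_ev + confinement_j / EV
    return (H * C / (total_ev * EV)) * 1e9  # emission wavelength in nm

# Illustrative parameters only (a generic narrow-gap semiconductor, NOT values from the study).
for radius in (4.0, 5.0, 6.0, 7.0):
    wl = emission_wavelength_nm(radius, bulk_gap_ev=0.35, m_e_eff=0.03, m_h_eff=0.4)
    print(f"radius {radius:.1f} nm -> emission near {wl:.0f} nm")
```

With these placeholder numbers the estimated emission sweeps across roughly 1,000 to 2,000 nanometers as the radius grows from 4 to 7 nanometers, which is the short-wave infrared window the article describes; the real dots are engineered by tuning both size and composition.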
The key was to develop versions of these quantum dots whose emissions matched the desired short-wave infrared frequencies and were bright enough to then be easily detected through the surrounding skin and muscle tissues. The team succeeded in making particles that are “orders of magnitude better than previous materials, and that allow unprecedented detail in biological imaging,” Bruns says. The synthesis of these new particles was initially described in a paper by graduate student Daniel Franke and others from the Bawendi group in Nature Communications last year.
The quantum dots the team produced are so bright that their emissions can be captured with very short exposure times, he says. This makes it possible to produce not just single images but video that captures details of motion, such as the flow of blood, making it possible to distinguish between veins and arteries.
The new light-emitting particles are also the first that are bright enough to allow imaging of internal organs in mice that are awake and moving, as opposed to previous methods that required them to be anesthetized, Bruns says. Initial applications would be for preclinical research in animals, as the compounds contain some materials that are unlikely to be approved for use in humans. The researchers are also working on developing versions that would be safer for humans.
The method also relies on the use of a newly developed camera that is highly sensitive to this particular range of short-wave infrared light. The camera is a commercially developed product, Bruns says, but his team was the first customer for the camera’s specialized detector, made of indium-gallium-arsenide. Though this camera was developed for research purposes, these frequencies of infrared light are also used as a way of seeing through fog or smoke.
Not only can the new method determine the direction of blood flow, Bruns says, it is detailed enough to track individual blood cells within that flow. “We can track the flow in each and every capillary, at super high speed,” he says. “We can get a quantitative measure of flow, and we can do such flow measurements at very high resolution, over large areas.”
Such imaging could potentially be used, for example, to study how the blood flow pattern in a tumor changes as the tumor develops, which might lead to new ways of monitoring disease progression or responsiveness to a drug treatment. “This could give a good indication of how treatments are working that was not possible before,” he says.
“This is an exciting and potentially revolutionary development for small animal imaging,” says Guillermo Tearney, a professor of pathology at Harvard Medical School, who was not involved in this work. “By using probes that are tuned to wavelengths further out in the short-wave near-infrared, the investigators overcome scattering, which is the major phenomenon” limiting such in vivo microscopy, he says.
“In so doing, simpler and less invasive interrogation methods can be utilized to understand structure and function in animal models, on both the organ and cellular level,” Tearney says. “I anticipate that these probes will have a major impact on the field of intravital [done with living subjects] imaging and bioscience research.”
The team included members from MIT’s departments of Chemistry, Chemical Engineering, Biological Engineering, and Mechanical Engineering, as well as from Harvard Medical School, the Harvard T.H. Chan School of Public Health, Raytheon Vision Systems, and University Medical Center in Hamburg, Germany. The work was supported by the National Institutes of Health, the National Cancer Institute, the National Foundation for Cancer Research, the Warshaw Institute for Pancreatic Cancer Research, the Massachusetts General Hospital Executive Committee on Research, the Army Research Office through the Institute for Soldier Nanotechnologies at MIT, the U.S. Department of Defense, and the National Science Foundation.
Full Story: http://news.mit.edu/2017/nanoparticles-quantum-dots-biological-imaging-0410

12
EEE / Engineers Build Robot Drone That Mimics Bat Flight
« on: April 20, 2017, 12:00:33 PM »
Bats have long captured the imaginations of scientists and engineers with their unrivaled agility, but their complex wing motions pose significant technological challenges for those seeking to recreate their flight in a robot.

The key flight mechanisms of bats now have been recreated with unprecedented fidelity in the Bat Bot—a self-contained robotic bat with soft, articulated wings, developed by researchers at Caltech and the University of Illinois at Urbana-Champaign (UIUC).

"This robot design will help us build safer and more efficient flying robots, and also give us more insight into the way bats fly," says Soon-Jo Chung, associate professor of aerospace and Bren Scholar in the Division of Engineering and Applied Science at Caltech, and Jet Propulsion Laboratory research scientist. (Caltech manages JPL for NASA.)

Chung, who joined the Caltech faculty in August 2016, developed the robotic bat, along with his former postdoctoral associate Alireza Ramezani from UIUC and Seth Hutchinson, a professor of electrical and computer engineering at the UIUC and Ramezani's co-advisor. Chung is the corresponding author of a paper describing the bat that was published on February 1 in Science Robotics, the newest member of the Science family of journals published by the American Association for the Advancement of Science.

The Bat Bot weighs only 93 grams and is shaped like a bat, with a roughly one-foot wingspan. It is capable of altering its wing shape by flexing, extending, and twisting at its shoulders, elbows, wrists, and legs. Bats arguably have the most sophisticated powered flight mechanism among animals, one that includes wings capable of changing shape. Their flight mechanism involves several different types of joints that interlock the bones and muscles to one another, creating a musculoskeletal system that is capable of movement in more than 40 rotational directions.

"Our work demonstrates one of the most advanced designs to date of a self-contained flapping-winged aerial robot with bat morphology that is able to perform autonomous flight," Ramezani says.

One of the key challenges was to create wings that change shape while flapping, the way a biological bat's do. Conventional lightweight fabrics, like nylon and Mylar, are not stretchable enough. Instead, the researchers developed a custom ultra-thin (56 microns), silicone-based membrane that simulates stretchable, thin bat wings.

Bat-inspired aerial robots have the potential to be significantly more energy efficient than current flying robots because their flexible wings amplify the motion of the robot's actuators. When a bat—or the Bat Bot—flaps its wings, the wing membranes fill up with air and deform. At the end of the wings' downward flapping motion, the membranes snap back to their usual shape and blast out the air, creating a huge amplification in power for the flap.

The design has potential applications for environments where more traditional quadrotor drones—which have four spinning rotors—could collide with objects or people, causing damage or injury.

The study is titled "A Biomimetic Robotic Platform to Study Flight Specializations of Bats." This research was funded by the National Science Foundation's National Robotics Initiative.
http://www.caltech.edu/news/engineers-build-robot-drone-mimics-bat-flight-53794

13
EEE / Photons and Electrons in Silicon: Caltech's Electric Light Orchestra
Caltech has long brought together masters from many fields to create the unimaginable. In the field of silicon photonics, in particular, the open and collaborative culture of the Division of Engineering and Applied Science has allowed for the assembly of an incredible orchestra of scientific expertise. Silicon (Si) wafers have for decades provided the instruments for this orchestra, but recently, a flood of integrated-systems research has led to applications that have completely transformed these instruments—and this, in turn, has impacted the life of almost every human on the planet.

There are many conductors of this orchestra, but ENGenious sat down with four of them: Professors Azita Emami, Ali Hajimiri, Kerry Vahala, and Amnon Yariv. They shared the history of the field, their current work in it, and some incredible potential applications, ranging from 3-D cameras on T-shirts to biosensors under the skin to planetary imagers.

Full Story: http://www.caltech.edu/news/photons-electrons-silicon-caltech-s-electric-light-orchestra-54160

14
EEE / Computing with Biochemical Circuits Made Easy
« on: April 20, 2017, 11:59:13 AM »
Electronic circuits are found in almost everything from smartphones to spacecraft and are useful in a variety of computational problems from simple addition to determining the trajectories of interplanetary satellites. At Caltech, a group of researchers led by Assistant Professor of Bioengineering Lulu Qian is working to create circuits using not the usual silicon transistors but strands of DNA.

The Qian group has made the technology of DNA circuits accessible to even novice researchers—including undergraduate students—using a software tool they developed called the Seesaw Compiler. Now, they have experimentally demonstrated that the tool can be used to quickly design DNA circuits that can then be built out of cheap "unpurified" DNA strands, following a systematic wet-lab procedure devised by Qian and colleagues.

A paper describing the work appears in the February 23 issue of Nature Communications.

Although DNA is best known as the molecule that encodes the genetic information of living things, it is also a useful chemical building block. This is because the smaller molecules that make up a strand of DNA, called nucleotides, bind together only according to very specific rules—an A nucleotide binds to a T, and a C nucleotide binds to a G. A strand of DNA is a sequence of nucleotides and can become a double strand if it binds with a sequence of complementary nucleotides.
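As a quick illustration of those pairing rules, here is a minimal Python sketch (the sequences are made up for the example) that computes the complementary strand and checks whether two strands can hybridize into a double strand.

```python
# Watson-Crick pairing: A binds T, C binds G, so a strand hybridizes with
# the reverse complement of its sequence.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the sequence that will bind to the given strand."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

def can_hybridize(strand_a: str, strand_b: str) -> bool:
    """True if the two strands are exact Watson-Crick complements."""
    return strand_b == reverse_complement(strand_a)

print(reverse_complement("ATCGGC"))        # -> GCCGAT
print(can_hybridize("ATCGGC", "GCCGAT"))   # -> True
```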

DNA circuits are good at collecting information within a biochemical environment, processing the information locally and controlling the behavior of individual molecules. Circuits built out of DNA strands instead of silicon transistors can be used in completely different ways than electronic circuits. "A DNA circuit could add 'smarts' to chemicals, medicines, or materials by making their functions responsive to the changes in their environments," Qian says. "Importantly, these adaptive functions can be programmed by humans."

To build a DNA circuit that can, for example, compute the square root of a number between 0 and 16, researchers first have to carefully design a mixture of single and partially double-stranded DNA that can chemically recognize a set of DNA strands whose concentrations represent the value of the original number. Mixing these together triggers a cascade of zipping and unzipping reactions, each reaction releasing a specific DNA strand upon binding. Once the reactions are complete, the identities of the resulting DNA strands reveal the answer to the problem.
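For orientation, the sketch below spells out the Boolean function such a circuit has to realize, taken here as the integer part of the square root of a 4-bit input (an assumption for illustration); in the DNA version, each input and output bit is represented by the relative concentration of a designated strand rather than by a voltage.

```python
import math

def sqrt_circuit_truth_table():
    """Truth table for a 4-bit square-root circuit: input n (0-15, MSB first)
    maps to the two output bits encoding the integer part of sqrt(n).
    This describes only the target logic, not the strand-displacement chemistry."""
    table = {}
    for n in range(16):
        in_bits = tuple((n >> i) & 1 for i in range(3, -1, -1))
        y = math.isqrt(n)
        out_bits = ((y >> 1) & 1, y & 1)
        table[in_bits] = out_bits
    return table

for in_bits, out_bits in sqrt_circuit_truth_table().items():
    print("input", "".join(map(str, in_bits)), "-> output", "".join(map(str, out_bits)))
```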

With the Seesaw Compiler, a researcher could tell a computer the desired function to be calculated and the computer would design the DNA sequences and mixtures needed. However, it was not clear how well these automatically designed DNA sequences and mixtures would work for building DNA circuits with new functions; for example, computing the rules that govern how a cell evolves by sensing neighboring cells, defined in a classic computational model called "cellular automata."

"Constructing a circuit made of DNA has thus far been difficult for those who are not in this research area, because every circuit with a new function requires DNA strands with new sequences and there are no off-the-shelf DNA circuit components that can be purchased," says Chris Thachuk, senior postdoctoral scholar in computing and mathematical sciences and second author on the paper. "Our circuit-design software is a step toward enabling researchers to just type in what they want to do or compute and having the software figure out all the DNA strands needed to perform the computation, together with simulations to predict the DNA circuit's behavior in a test tube. Even though these DNA strands are still not off-the-shelf products, we have now shown that they do work well for new circuits with user-designed functions."

"In the 1950s, only a few research labs that understood the physics of transistors could build early versions of electronic circuits and control their functions," says Qian. "But today many software tools are available that use simple and human-friendly languages to design complex electronic circuits embedded in smart machines. Our software is kind of like that: it translates simple and human-friendly descriptions of computation to the design of complex DNA circuits."

The Seesaw Compiler was put to the test in 2015 in a unique course at Caltech, taught by Qian and called "Design and Construction of Programmable Molecular Systems" (BE/CS 196 ab). "How do you evaluate the accessibility of a new technology? You give the technology to someone who is intellectually capable but has minimal prior background," Qian says.

"The students in this class were undergrads and first-year graduate students majoring in computer science and bioengineering," says Anupama Thubagere, a graduate student in biology and bioengineering and first author on the paper. "I started working with them as a head teaching assistant and together we soon discovered that using the Seesaw Compiler to design a DNA circuit was easy for everyone."

However, building the designed circuit in the wet lab was not so simple. Thus, with continued efforts after the class, the group set out to develop a systematic wet-lab procedure that could guide researchers—even novices like undergraduate students—through the process of building DNA circuits. "Fortunately, we found a general solution to every challenge that we encountered, now making it easy for everyone to build their own DNA circuits," Thubagere says.

The group showed that it was possible to use cheap, "unpurified" DNA strands in these circuits using the new process. This was only possible because steps in the systematic wet-lab procedure were designed to compensate for the lower synthesis quality of the DNA strands.

"We hope that this work will convince more computer scientists and researchers from other fields to join our community in developing increasingly powerful molecular machines and to explore a much wider range of applications that will eventually lead to the transformation of technology that has been promised by the invention of molecular computers," Qian says.

The paper is titled, "Compiler-aided systematic construction of large-scale DNA strand displacement circuits using unpurified components." Other Caltech co-authors include graduate students Robert Johnson and Kevin Cherry, alumnus Joseph Berleant (BS '16), and undergraduate Diana Ardelean. The work was funded by the National Science Foundation, the Banting Postdoctoral Fellowships program, the Burroughs Wellcome Fund, and Innovation in Education funds from Caltech.
Full Story: http://www.caltech.edu/news/computing-biochemical-circuits-made-easy-54206

15
EEE / Electrons Use DNA Like a Wire for Signaling DNA Replication
« on: April 20, 2017, 11:58:41 AM »
In the early 1990s, Jacqueline Barton, the John G. Kirkwood and Arthur A. Noyes Professor of Chemistry at Caltech, discovered an unexpected property of DNA—that it can act like an electrical wire to transfer electrons quickly across long distances. Later, she and her colleagues showed that cells take advantage of this trait to help locate and repair potentially harmful mutations to DNA.

Now, Barton's lab has shown that this wire-like property of DNA is also involved in a different critical cellular function: replicating DNA. When cells divide and replicate themselves in our bodies—for example in the brain, heart, bone marrow, and fingernails—the double-stranded helix of DNA is copied. DNA also copies itself in reproductive cells that are passed on to progeny.

The new Caltech-led study, based on work by graduate student Elizabeth O'Brien in collaboration with Walter Chazin's group at Vanderbilt University, shows that a key protein required for replicating DNA depends on electrons traveling through DNA.

"Nature is the best chemist and knows exactly how to take advantage of DNA electron-transport chemistry," says Barton, who is also the Norman Davidson Leadership Chair of Caltech's Division of Chemistry and Chemical Engineering.

"The electron transfer process in DNA occurs very quickly," says O'Brien, lead author of the study, appearing in the February 24 issue of Science. "It makes sense that the cell would utilize this quick-acting pathway to regulate DNA replication, which necessarily is a very rapid process."

The researchers found their first clue that DNA replication might involve the transport of electrons through the double helix by taking a closer look at the proteins involved. Two of the main players in DNA replication, critical at the start of the process, are the proteins DNA primase and DNA polymerase alpha. DNA primase typically binds to single-stranded, uncoiled DNA to begin the replication process. It creates a "primer" made of RNA to help DNA polymerase alpha start its job of copying the single strand of DNA to create a new segment of double-helical DNA.

DNA primase and DNA polymerase alpha molecules both contain iron-sulfur clusters. Barton and her colleagues previously discovered that these metal clusters are crucial for DNA electron transport in DNA repair. In DNA repair, specific proteins send electrons down the double helix to other DNA-bound repair proteins as a way to "test the line," so to speak, and make sure there are no mutations in the DNA. If there are mutations, the line is essentially broken, alerting the cell that mutations are in need of repair. The iron-sulfur clusters in the DNA repair proteins are responsible for donating and accepting traveling electrons.

Barton and her group wanted to know if the iron-sulfur clusters were doing something similar in the DNA-replication proteins.

"We knew the iron-sulfur clusters must be doing something in the DNA-replication proteins, otherwise why would they be there? Iron can damage the DNA, so nature would not have wanted the iron there were it not for a good reason," says Barton.

Through a series of tests in which mutations were introduced into the DNA primase protein, the researchers showed that this protein needs to be in an oxidized state—which means it has lost electrons—to bind tightly to DNA and participate in DNA electron transport. When the protein is reduced—meaning it has gained electrons—it does not bind tightly to DNA.

"The electronic state of the iron-sulfur cluster in DNA primase acts like an on/off switch to initiate DNA replication," says O'Brien.

What's more, the researchers demonstrated that electron transport through DNA plays a role in signaling DNA primase to leave the DNA strand. (Though DNA primase must bind to single-stranded DNA to kick off replication, the process cannot begin in earnest until the protein pops back off the strand).

The scientists propose that the DNA polymerase alpha protein, which sits on the double helix strand, sends electrons down the strand to DNA primase. DNA primase accepts the electrons, becomes reduced, and lets go of the DNA. This donation and acceptance of electrons is done with the help of the iron-sulfur clusters.
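To make the proposed switching logic concrete, here is a toy state model in Python of the handoff described above; the class and its methods are hypothetical bookkeeping, not a simulation of the actual chemistry.

```python
from dataclasses import dataclass

@dataclass
class Primase:
    """Toy model of the proposed redox switch in DNA primase."""
    oxidized: bool = True       # oxidized [4Fe4S] cluster -> binds DNA tightly
    bound_to_dna: bool = False

    def bind(self):
        # Tight binding is proposed to require the oxidized state.
        self.bound_to_dna = self.oxidized

    def receive_electron(self):
        # An electron arriving through the DNA (sent from polymerase alpha)
        # reduces the cluster, and the reduced protein releases the strand.
        self.oxidized = False
        self.bound_to_dna = False

primase = Primase()
primase.bind()
print(primase.bound_to_dna)     # True: oxidized primase stays on the DNA
primase.receive_electron()
print(primase.bound_to_dna)     # False: reduced primase has let go
```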

"You have to get the DNA primase off the DNA quickly—that really starts the whole replication process," says Barton. "It's a hand off of electrons from one cluster to the other through the DNA double helix."

Many proteins involved in DNA reactions also contain iron-sulfur clusters and may also play roles in DNA electron transport chemistry, Barton says. What began as a fundamental question 25 years ago about whether DNA could support migration of electrons continues to lead to new questions about the chemical workings of cells. "That's the wonder of basic research," she says. "You start with one question and the answer leads you to new questions and new areas."

The study, titled "The [4Fe4S] Cluster of Human DNA Primase functions as a Redox Switch using DNA Charge Transport," was funded by the National Institutes of Health. The collaborative work also included Vanderbilt coauthors Marilyn Holt, Matthew Thompson, Lauren Salay, and Aaron Ehlinger.
Full Story: http://www.caltech.edu/news/electrons-use-dna-wire-signaling-dna-replication-54208
