Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Rohan Sarker

1
EEE / Spinning new materials in a thread
« on: May 09, 2018, 01:33:08 PM »
Researchers at MIT have succeeded in making a fine thread that functions as a diode, a device at the heart of modern electronics. This feat — made possible by a new approach to a type of fiber manufacturing known as fiber drawing — could open up possibilities for fabricating a wide variety of electronic and photonic devices within composite fibers, using a variety of materials.

Fiber-drawing techniques are used to produce the optical fibers behind much of today’s broadband communications, but these techniques have been limited to materials that can partially melt and stretch like taffy at the temperatures being used for drawing the fibers. The new work demonstrates a way of synthesizing new materials during the fiber-making process, including materials whose melting points are far higher than the temperatures used to process the fibers. The simple proof-of-concept demonstration carried out by the MIT researchers could open the door to a wide array of sophisticated devices based on composite fibers, they say.

The findings, part of a doctoral research project in materials science by Nicholas Orf, have been published in the journal Proceedings of the National Academy of Sciences. The paper was co-authored by Orf (now a postdoc at MIT); John Joannopoulos, the Francis Wright Davis Professor of Physics; Yoel Fink, associate professor; Marc Baldo, associate professor; Ofer Shapira, a research scientist in the Research Laboratory of Electronics; postdoc Fabien Sorin; and Sylvain Danto, who was a postdoc at the time. The work was carried out in Fink’s research group.

All previous work on fiber-drawing ended up with the same materials that were there to begin with, just in a different shape, Orf says, adding: “In this method, new materials are formed during the drawing process.”

Fiber drawing involves preparing a “preform” of materials, such as a large glass rod resembling an oversized model of the fiber to be produced. This preform is heated until it reaches a taffy-like consistency and then pulled into a thin fiber. The materials that make up the preform remain unchanged as its dimensions are drastically reduced.

In the current research, the preform contained selenium, sulfur, zinc and tin, arranged within a coating of polymer material. The drawing process, carried out at a temperature of just 260 degrees Celsius (500 degrees Fahrenheit), combined these materials to form fibers containing zinc selenide, even though that compound has a melting point of 1,530 degrees Celsius (2,786 degrees Fahrenheit).

The resulting fiber was a simple but functional semiconductor device called a diode — a sort of one-way valve for electrical current, allowing electrons to flow through it in only one direction. The diode, never before made by such a method, is a basic building block for electrical circuits.

“This shows that many more kinds of materials can be incorporated into fibers than ever before,” Orf says. Because the physical arrangements placed in the preform are preserved in the drawn fiber, it should ultimately be possible to incorporate more complex electronic circuits within the structure of the fiber itself.

Such fibers might find uses as sensors for light, temperature or other environmental conditions, Orf says. Or the fibers could then be woven, such as to make a solar-cell fabric, he says.

Fink says his research group has been working for more than a decade on expanding the kinds of materials and structures that can be incorporated into fibers. He says that despite the rapid progress made in the last few decades in various forms of electronics, “there has been little progress in advancing the overall functionality and sophistication of fibers and fabrics … one of the earliest forms of human expression.”

The group’s research, he says, has stemmed from the basic question, “How sophisticated can a fiber be?” Over the years they have incorporated more and more materials, structures and functions into fibers. But one of the biggest limitations has been the set of materials that could be incorporated into the fibers; this new work has greatly expanded that list. The work shows that it is possible, Fink says, “to use the fiber draw as a way to synthesize new materials. It’s the first time this has been demonstrated anywhere.”

Zinc selenide, the specific compound formed in this drawing process, is an important material for both its electronic and its optical properties, Orf says. Such fibers might have uses in new photonic circuits, which use light beams to perform functions similar to those carried out by flowing electrons in electronic circuits.

While this experiment produced 15 individual diode devices in the fiber, each separate from the others, Fink says that through continuing research, “We think you could probably do hundreds” and even interconnect them to form electronic circuits.

Professor John Ballato, director of the Center for Optical Materials Science and Engineering Technologies at Clemson University, adds, “There has been growing international interest in semiconducting optical fibers over the past few years. Such fibers offer the potential to marry the optoelectronic benefits of semiconductors, [which] we know from the silicon photonics and integrated circuit worlds, with the light guidance and long path lengths of optical fibers.” The new MIT work is particularly significant, he says, because of “the utilization of the fiber as a micro solid-state chemical reactor to realize materials that are not generally amenable to direct fiber fabrication.”

Ballato, who was not involved in this research, adds that a similar technique has been used to produce reactions using gases, but that to the best of his knowledge, “this is the first … to extend this concept to the solid state, where indeed a more bountiful opportunity exists to achieve a wider range of materials.” The process is so flexible and has the potential to be used with such a range of materials, he says, that “it can be considered an important step to a ‘fiber that does everything’ — creates, propagates, senses and manipulates photons, electrons [and] phonons.”

The work was supported by the U.S. Army through the MIT Institute for Soldier Nanotechnologies and by the Materials Research Science and Engineering Center Program of the National Science Foundation.

2
EEE / Honing household helpers
« on: May 09, 2018, 01:32:27 PM »
Imagine a robot able to retrieve a pile of laundry from the back of a cluttered closet, deliver it to a washing machine, start the cycle and then zip off to the kitchen to start preparing dinner.

This may have been a domestic dream a half-century ago, when the fields of robotics and artificial intelligence first captured public imagination. However, it quickly became clear that even “simple” human actions are extremely difficult to replicate in robots. Now, MIT computer scientists are tackling the problem with a hierarchical, progressive algorithm that has the potential to greatly reduce the computational cost associated with performing complex actions.

Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering, and Tomás Lozano-Pérez, the School of Engineering Professor of Teaching Excellence and co-director of MIT’s Center for Robotics, outline their approach in a paper titled “Hierarchical Task and Motion Planning in the Now,” which they presented at the IEEE Conference on Robotics and Automation earlier this month in Shanghai.

Traditionally, programs that get robots to function autonomously have been split into two types: task planning and geometric motion planning. A task planner can decide that it needs to traverse the living room, but be unable to figure out a path around furniture and other obstacles. A geometric planner can figure out how to get to the phone, but not actually decide that a phone call needs to be made.

Of course, any robot that’s going to be useful around the house must have a way to integrate these two types of planning. Kaelbling and Lozano-Pérez believe that the key is to break the computationally burdensome larger goal into smaller steps, then make a detailed plan for only the first few, leaving the exact mechanisms of subsequent steps for later. “We’re introducing a hierarchy and being aggressive about breaking things up into manageable chunks,” Lozano-Pérez says. Though the idea of a hierarchy is not new, the researchers are applying an incremental breakdown to create a timeline for their “in the now” approach, in which robots follow the age-old wisdom of “one step at a time.”

The result is robots that are able to respond to environments that change over time due to external factors as well as their own actions. These robots “do the execution interleaved with the planning,” Kaelbling says.

The trick is figuring out exactly which decisions need to be made in advance, and which can — and should — be put off until later.
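
To make the flavor of this concrete, here is a minimal toy sketch of planning “in the now” (not the authors’ actual algorithm; the world, the object names and the goal are all invented): compute a coarse task plan, refine and execute only its first step, then replan from whatever state results, even if that means discovering late that a target spot is already occupied.

```python
# Toy sketch of "in the now" hierarchical planning (illustrative only, not the
# authors' algorithm): plan the task coarsely, refine and execute just the
# first step in detail, then replan from the newly observed world state.

def abstract_plan(goal, state):
    # Coarse task plan: which objects still need to move, in any order.
    return [obj for obj, loc in goal.items() if state.get(obj) != loc]

def refine_and_execute(obj, goal, state):
    # Detailed "geometric" step, computed only when needed. If the target
    # location turns out to be occupied, clear it first -- a deliberate,
    # possibly suboptimal extra move.
    target = goal[obj]
    blocker = next((o for o, loc in state.items() if loc == target and o != obj), None)
    if blocker is not None:
        state[blocker] = "side_table"  # move the obstacle out of the way
    state[obj] = target

def plan_in_the_now(goal, state):
    while (steps := abstract_plan(goal, state)):
        refine_and_execute(steps[0], goal, state)  # commit to one step at a time
    return state

# A cluttered world: B already sits where A must go.
world = {"A": "closet", "B": "spot_L"}
print(plan_in_the_now({"A": "spot_L"}, world))  # {'A': 'spot_L', 'B': 'side_table'}
```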

Sometimes, procrastination is a good thing

Kaelbling compares this approach to the intuitive strategies humans use for complex activities. She cites flying from Boston to San Francisco as an example: You need an in-depth plan for arriving at Logan Airport on time, and perhaps you have some idea of how you will check in and board the plane. But you don’t bother to plan your path through the terminal once you arrive in San Francisco, because you probably don’t have advance knowledge of what the terminal looks like — and even if you did, the locations of obstacles such as people or baggage are bound to change in the meantime. Therefore, it would be better — necessary, even — to wait for more information.

Why shouldn’t robots use the same strategy? Until now, most robotics researchers have focused on constructing complete plans, with every step from start to finish detailed in advance before execution begins. This is a way to maximize optimality — accomplishing the goal in the fewest movements — and to ensure that a plan is actually achievable before initiating it.

But the researchers say that while this approach may work well in theory and in simulations, once it comes time to run the program in a robot, the computational burden and real-world variability make it impractical to consider the details of every step from the get-go. “You have to introduce an approximation to get some tractability. You have to say, ‘Whichever way this works out, I’m going to be able to deal with it,’” Lozano-Pérez says.

Their approach extends not just to task planning, but also to geometric planning: Think of the computational cost associated with building a precise map of every object in a cluttered kitchen. In Kaelbling and Lozano-Pérez’s “in the now” approach, the robot could construct a rough map of the area where it will start — say, the countertop as a place for assembling ingredients. Later on in the plan — if it becomes clear that the robot will need a detailed map of the fridge’s middle shelf, to be able to reach for a jar of pickles, for example — it will refine its model as necessary, using valuable computation power to model only those areas crucial to the task at hand.

Finding the ‘sweet spot’

Kaelbling and Lozano-Pérez’s method differs from the traditional start-to-finish approach in that it has the potential to introduce suboptimalities in behavior. For example, a robot may pick up object ‘A’ to move it to a location ‘L,’ only to arrive at L and realize another object, ‘B,’ is already there. The robot will then have to drop A and move B before re-grasping A and placing it in L. Perhaps, if the robot had been able to “think ahead” far enough to check L for obstacles before picking up A, a few extra movements could have been avoided.

But, ultimately, the robot still gets the job done. And the researchers believe sacrificing some degree of behavior optimality is worth it to be able to break an extremely complex problem into doable steps. “In computer science, the trade-offs are everything,” Kaelbling says. “What we try to find is some kind of ‘sweet spot’ … where we’re trading efficiency of the actions in the world for computational efficiency.”

Citing the field’s traditional emphasis on optimal behavior, Lozano-Pérez adds, “We’re very consciously saying, ‘No, if you insist on optimality then it’s never going to be practical for real machines.’”

Stephen LaValle, a professor of computer science at the University of Illinois at Urbana-Champaign who was not affiliated with the work, says the approach is an attractive one. “Often in robotics, we have a tendency to be very analytical and engineering-oriented — to want to specify every detail in advance and make sure everything is going to work out and be accounted for,” he says. “[The researchers] take a more optimistic approach that we can figure out certain details later on in the pipeline,” and in doing so, reap a “benefit of efficiency of computational load.”

Looking to the future, the researchers plan to build in learning algorithms so robots will be better able to judge which steps are OK to put off, and which ones should be dealt with earlier in the process. To demonstrate this, Kaelbling returns to the travel example: “If you’re going to rent a car in San Francisco, maybe that’s something you do need to plan in advance,” she says, because putting it off might present a problem down the road — for instance, if you arrive to find the agencies have run out of rental cars.

Although “household helper” robots are an obvious — and useful — application for this kind of algorithm, the researchers say their approach could work in a number of situations, including supply depots, military operations and surveillance activities.

“So it’s not strictly about getting a robot to do stuff in your kitchen,” Kaelbling says. “Although that’s the example we like to think about — because everybody would be able to appreciate that.”

3
EEE / Layer upon layer
« on: May 09, 2018, 01:31:28 PM »
Graphene, a form of pure carbon arranged in a lattice just one atom thick, has interested countless researchers with its unique strength and its electrical and thermal conductivity. But one key property it lacks — which would make it suitable for a plethora of new uses — is the ability to form a band gap, needed for devices such as transistors, computer chips and solar cells.

Now, a team of MIT scientists has found a way to produce graphene in significant quantities in a two- or three-layer form. When the layers are arranged just right, these structures give graphene the much-desired band gap — an energy range that falls between the bands, or energy levels, where electrons can exist in a given material.

“It’s a breakthrough in graphene technology,” says Michael Strano, the Charles and Hilda Roddey Associate Professor of Chemical Engineering at MIT. The new work is described in a paper published this week in the journal Nature Nanotechnology, co-authored by graduate student Chih-Jen Shih, Professor of Chemical Engineering Daniel Blankschtein, Strano and 10 other students and postdocs.

Graphene was first proven to exist in 2004 (a feat that led to the 2010 Nobel Prize in physics), but making it in quantities large enough for anything but small-scale laboratory research has been a challenge. The standard method remains using adhesive tape to pick up tiny flakes of graphene from a block of highly purified graphite (the material of pencil lead) — a technique that does not lend itself to commercial-scale production.

The new method, however, can be carried out at a scale that opens up the possibility of real, practical applications, Strano says, and makes it possible to produce the precise arrangement of the layers — called A-B stacked, with the atoms in one layer centered over the spaces between atoms in the next — that yields desirable electronic properties.

“If you want a whole lot of bilayers that are A-B stacked, this is the only way to do it,” he says.

The trick takes advantage of a technique originally developed as far back as the 1950s and ’60s by MIT Institute Professor Mildred Dresselhaus, among others: Compounds of bromine or chlorine introduced into a block of graphite naturally find their way into the structure of the material, inserting themselves regularly between every other layer, or in some cases every third layer, and pushing the layers slightly farther apart in the process. Strano and his team found that when the graphite is dissolved, it naturally comes apart where the added atoms lie, forming graphene flakes two or three layers thick.

“Because this dispersion process can be very gentle, we end up with much larger flakes” than anyone has made using other methods, Strano says. “Graphene is a very fragile material, so it requires gentle processing.”

Such formations are “one of the most promising candidates for post-silicon nanoelectronics,” the authors say in their paper. The flakes produced by this method, as large as 50 square micrometers in area, are large enough to be useful for electronic applications, they say. To prove the point, they were able to manufacture some simple transistors on the material.

The material can now be used to explore the development of new kinds of electronic and optoelectronic devices, Strano says. And unlike the “Scotch tape” approach to making graphene, “our approach is industrially relevant,” Strano says.

James Tour, a professor of chemistry and of mechanical engineering and materials science at Rice University, who was not involved in this research, says the work involved “brilliant experiments” that produced convincing statistics. He adds that further work would be needed to improve the yield of usable graphene material in their solutions, now at about 35 to 40 percent, to more than 90 percent. But once that is achieved, he says, “this solution-phase method could dramatically lower the cost of these unique materials and speed the commercialization of them in applications such as optical electronics and conductive composites.”

While it’s hard to predict how long it will take to develop this method to the point of commercial applications, Strano says, “it’s coming about at a breakneck pace.” A similar solvent-based method for making single-layer graphene is already being used to manufacture some flat-screen television sets, and “this is definitely a big step” toward making bilayer or trilayer devices, he says.

The work was supported by grants from the U.S. Office of Naval Research through a multi-university initiative that includes Harvard University and Boston University along with MIT, as well as from the Dupont/MIT Alliance, a David H. Koch fellowship, and the Army Research Office through the Institute for Soldier Nanotechnologies at MIT.

4
EEE / The math of the Rubik’s cube
« on: May 09, 2018, 01:30:34 PM »
For cubes bigger than the standard Rubik’s cube — with, say, four or five squares to a row, rather than three — adequately canvassing starting positions may well be beyond the computational capacity of all the computers in the world. But in a paper to be presented at the 19th Annual European Symposium on Algorithms in September, researchers from MIT, the University of Waterloo and Tufts University establish the mathematical relationship between the number of squares in a cube and the maximum number of moves necessary to solve it. Their method of proof also provides an efficient algorithm for solving a cube that’s in its worst-case state.

[Image: Rubik’s cubes. Photo: Dominick Reuter]
Computer science is concerned chiefly with the question of how long algorithms take to execute, but computer scientists measure the answer to this question in terms of the number of elements the algorithm acts upon. The execution time of an algorithm that finds the largest number in a list, for instance, is proportional to the length of the list. A “dumb” algorithm for sorting the numbers in the list from smallest to largest, however, will have an execution time proportional to the square of the length of the list.
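
A minimal illustration of that contrast, with an invented list: the linear scan touches each element once, while the naive sort makes a pass per position.

```python
# The contrast described above, made concrete: finding the maximum touches
# each of the n elements once, while a "dumb" selection-style sort does on
# the order of n*n comparisons.

def find_max(xs):                 # ~n steps: one pass
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best

def dumb_sort(xs):                # ~n*n steps: a nested pass per position
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[j] < xs[i]:
                xs[i], xs[j] = xs[j], xs[i]
    return xs

data = [5, 3, 9, 1, 4]
print(find_max(data))   # 9
print(dumb_sort(data))  # [1, 3, 4, 5, 9]
```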

Solution with a twist

Erik Demaine, an associate professor of computer science and engineering at MIT; his father, Martin Demaine, a visiting scientist at MIT’s Computer Science and Artificial Intelligence Laboratory; graduate student Sarah Eisenstat; Anna Lubiw, who was Demaine’s PhD thesis adviser at the University of Waterloo; and Tufts graduate student Andrew Winslow showed that the maximum number of moves required to solve a Rubik’s cube with N squares per row is proportional to N²/log N. “That that’s the answer, and not N², is a surprising thing,” Demaine says.

The standard way to solve a Rubik’s cube, Demaine explains, is to find a square that’s out of position and move it into the right place while leaving the rest of the cube as little changed as possible. That approach will indeed yield a worst-case solution that’s proportional to N². Demaine and his colleagues recognized that under some circumstances, a single sequence of twists could move multiple squares into their proper places, cutting down the total number of moves.

But finding a way to mathematically describe those circumstances, and determining how often they’d arise when a cube was in its worst-case state, was no easy task. “In the first hour, we saw that it had to be at least N²/log N,” Demaine says. “But then it was many months before we could prove that N²/log N was enough moves.”

Because their method of analysis characterizes the cases in which multiple squares can be moved into place simultaneously, it provides a way to recognize those cases, and thus an algorithm for solving a disordered cube. The algorithm isn’t quite optimal: It always requires a few extra moves. But as the number of squares per face increases, those moves dwindle in significance.

Go configure

The Rubik’s cube is an instance of what’s called a configuration problem, the best-known example of which involves finding the most efficient way to reorganize boxes stacked in a warehouse. It’s possible, Demaine says, that the tools he and his colleagues have developed for studying the Rubik’s cube could be adapted to such problems.

But Demaine is also a vocal defender of research that doesn’t have any obvious applications. “My life has been driven by solving problems that I consider fun,” he says. “It’s always hard to tell at the moment what is going to be important. Studying prime numbers was just a recreational activity. There was no practical importance to that for hundreds of years until cryptography came along.”

But, he adds, “the aesthetic is not just to look at things that are fun but also look at problems that are simple. I think the simpler the mathematical problem, the more likely that it’s going to arise in some important practical application in the future. And the Rubik’s cube is kind of the epitome of simplicity.”

“Erik is always very interested in extending the reach of popular mathematics,” says Marc van Kreveld, an associate professor in the Department of Information and Computing Sciences at Utrecht University in the Netherlands, who designs puzzles in his spare time. “That’s really one of the things that he tries to do, to bring across that mathematics is not just some boring area of study, but it’s actually fun, and you can do a lot with it, and it’s beautiful.”

“Erik’s a very brilliant person,” van Kreveld adds. “He is already very successful in his hard-core research. But the popularizing is also very necessary, I think. You should not underestimate the importance of motivating students to learn.”

5
EEE / The future of chip manufacturing
« on: May 09, 2018, 01:29:09 PM »
For 50 years, the transistors on computer chips have been getting smaller, and for 50 years, manufacturers have used the same technique — photolithography — to make their chips. But the very wavelength of visible light limits the size of the transistors that photolithography can produce. If chipmakers are to keep shrinking chip features, they’ll probably need to turn to other manufacturing methods.

Researchers have long used a technique called electron-beam (or e-beam) lithography to make prototype chips, but standard e-beam lithography is much slower than photolithography. Increasing its speed generally comes at the expense of resolution: Previously, the smallest chip features that high-speed e-beams could resolve were 25 nanometers across, barely better than the experimental 32-nanometer photolithography systems that several manufacturers have demonstrated. In a forthcoming issue of the journal Microelectronic Engineering, however, researchers at MIT’s Research Laboratory of Electronics (RLE) present a way to get the resolution of high-speed e-beam lithography down to just nine nanometers. Combined with other emerging technologies, it could point the way toward making e-beam lithography practical as a mass-production technique.

The most intuitive way for manufacturers to keep shrinking chip features is to switch to shorter wavelengths of light — what’s known in the industry as extreme ultraviolet. But that’s easier said than done. “Because the wavelength is so small, the optics [are] all different,” says Vitor Manfrinato, an RLE graduate student and first author on the new paper. “So the systems are much more complicated … [and] the light source is very inefficient.”

Dropping the mask

Visible-light, ultraviolet and e-beam lithography all use the same general approach. The materials that compose a chip are deposited in layers. Every time a new layer is laid down, it’s covered with a material called a resist. Much like a piece of photographic paper, the resist is exposed — to either light or a beam of electrons — in a carefully prescribed pattern. The unexposed resist and the material underneath are then etched away, while the exposed resist protects the material it covers. Repeating this process gradually builds up three-dimensional structures on the chip’s surface.

The main difference between e-beam lithography and photolithography is the exposure phase. In photolithography, light shines through a patterned stencil called a mask, striking the whole surface of the chip at once. With e-beam lithography, on the other hand, a beam of electrons scans across the surface of the resist, row by row, a more time-consuming operation.

One way to improve the efficiency of e-beam lithography is to use multiple electron beams at once, but there’s still the problem of how long a beam has to remain trained on each spot on the surface of the resist. That’s the problem the MIT researchers address.

Lowering the dose

The fewer electrons it takes to expose a spot on the resist, the faster the e-beam can move. But lowering the electron count means lowering the energy of the beam, and low-energy electrons tend to “scatter” more than high-energy electrons as they pass through the resist, spreading farther apart the deeper they go. To reduce scattering, e-beam systems generally use high-energy beams, but that requires resists tailored to larger doses of electrons.

Manfrinato, a member of RLE’s Quantum Nanostructures and Nanofabrication Group, and group leader Karl Berggren, the Emanuel E. Landsman (1958) Associate Professor of Electrical Engineering and Computer Science — together with professor of electrical engineering Henry Smith, graduate students Lin Lee Cheong and Donald Winston, and visiting student Huigao Duan, all of RLE — used two tricks to improve the resolution of high-speed e-beam lithography. The first was to use a thinner resist layer, to minimize electron scattering. The second was to use a solution containing ordinary table salt to “develop” the resist, hardening the regions that received slightly more electrons but not those that received slightly less.

Pieter Kruit, a professor of physics at the Delft University of Technology in the Netherlands and co-founder of Mapper, a company that has built lithographic systems with 110 parallel e-beams, says that in addition to being faster, e-beam systems that deliver smaller doses of electrons are much easier to build. The larger the dose of electrons, the more energy the system consumes, and the more insulation it requires between electrodes. “That takes so much space that it’s impossible to build an instrument,” Kruit says.

Kruit doubts manufacturers will use exactly the resist that the MIT researchers did in their experiments. Although the researchers’ goal was to find a resist that would respond to small doses of electrons, the one that they settled on is actually “a little bit too sensitive,” Kruit says: The amount of electricity that an electrode delivers to a chip surface will vary slightly, he explains, and if the resist is too sensitive to those variations, the width of the chip features will vary, too. “But that is a matter of modifying the resist slightly, and that’s what resist companies do all the time,” he adds.

6
EEE / Improving recommendation systems
« on: May 09, 2018, 01:26:59 PM »
Recommendation algorithms are a vital part of today’s Web, the basis of the targeted advertisements that account for most commercial sites’ revenues and of services such as Pandora, the Internet radio site that tailors song selections to listeners’ declared preferences. The DVD rental site Netflix deemed its recommendation algorithms important enough that it offered a million-dollar prize to anyone who could improve their predictions by 10 percent.

But Devavrat Shah, the Jamieson Career Development Associate Professor of Electrical Engineering and Computer Science in MIT’s Laboratory for Information and Decision Systems, thinks that the most common approach to recommendation systems is fundamentally flawed. Shah believes that, instead of asking users to rate products on, say, a five-star scale, as Netflix and Amazon do, recommendation systems should ask users to compare products in pairs. Stitching the pairwise rankings into a master list, Shah argues, will offer a more accurate representation of consumers’ preferences.

In a series of papers (paper 1 | paper 2 | paper 3) published over the last few years, Shah, his students Ammar Ammar and Srikanth Jagabathula, and Vivek Farias, an associate professor at the MIT Sloan School of Management, have demonstrated algorithms that put that theory into practice. Besides showing how the algorithms can tailor product recommendations to customers, they’ve also built a website that uses the algorithms to help large groups make collective decisions. And at an Institute for Operations Research and Management Sciences conference in June, they presented a version of their algorithm that had been tested on detailed data about car sales collected over the span of a year by auto dealers around the country. Their algorithm predicted car buyers’ preferences with 20 percent greater accuracy than existing algorithms.

Calibration conundrum

One of the problems with basing recommendations on ratings, Shah explains, is that an individual’s rating scale will tend to fluctuate. “If my mood is bad today, I might give four stars, but tomorrow I’d give five stars,” he says. “But if you ask me to compare two movies, most likely I will remain true to that for a while.”

Similarly, ratings scales may vary between people. “Your three stars might be my five stars, or vice versa,” Shah says. “For that reason, I strongly believe that comparison is the right way to capture this.”

Moreover, Shah explains, anyone who walks into a store and selects one product from among the three displayed on a shelf is making an implicit comparison. So in many contexts, comparison data is actually easier to come by than ratings.

Shah believes that the advantages of using comparison as the basis for recommendation systems are obvious but that the computational complexity of the approach has prevented its wide adoption. The results of thousands — or millions — of pairwise comparisons could, of course, be contradictory: Some people may like "Citizen Kane" better than "The Godfather," but others may like "The Godfather" better than "Citizen Kane." The only sensible way to interpret conflicting comparisons is statistically. But there are more than three million ways to order a ranking of only 10 movies, and every one of them may have some probability, no matter how slight, of representing the ideal ordering of at least one ranker. Increase the number of movies to 20, and there are more than two quintillion ways to order the list.
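
The counting involved is just factorials, which two lines of Python confirm:

```python
# Quick check of the counting claims above: orderings of n items grow as n!.
import math

print(math.factorial(10))  # 3628800 -> "more than three million" for 10 movies
print(math.factorial(20))  # 2432902008176640000 -> over two quintillion for 20
```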

Ordering out

So Shah and his colleagues make some assumptions that drastically reduce the number of possible orderings they have to consider. The first is simply to throw out the outliers. For example, Netflix’s movie-rental data assigns the Robin Williams vehicle "Patch Adams" the worst reviews, on average, of any film with a statistically significant number of ratings. So the MIT algorithm would simply disregard all the possible orderings in which "Patch Adams" ranked highly.

Even with the outliers eliminated, however, a large number of plausible orderings might remain. From that group, the MIT algorithm selects a subset: the smallest group of orderings that fit the available data. This approach can winnow an astronomically large number of orderings down to one that’s within the computational purview of a modern computer.

Finally, when the algorithm has arrived at a reduced number of orderings, it uses a movie’s rank in each of the orderings, combined with the probability of that ordering, to assign the movie an overall score. Those scores determine the final ordering.
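
As an illustrative sketch, and not the authors’ exact procedure, that final scoring step can be read this way: each surviving ordering carries a probability, and an item’s overall score is its probability-weighted rank across those orderings. The movie names and probabilities below are invented.

```python
# Hedged sketch of the final scoring step (illustrative, not the authors'
# exact method): aggregate each item's rank across candidate orderings,
# weighted by each ordering's probability. Lower expected rank is better.

def expected_ranks(orderings):
    """orderings: list of (probability, list_of_items_best_first)."""
    scores = {}
    for prob, order in orderings:
        for rank, item in enumerate(order):
            scores[item] = scores.get(item, 0.0) + prob * rank
    return sorted(scores, key=scores.get)  # best (lowest expected rank) first

candidates = [
    (0.6, ["Citizen Kane", "The Godfather", "Patch Adams"]),
    (0.4, ["The Godfather", "Citizen Kane", "Patch Adams"]),
]
print(expected_ranks(candidates))
# ['Citizen Kane', 'The Godfather', 'Patch Adams']
```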

Paat Rusmevichientong, an associate professor of information and operations management at the University of Southern California, thinks that the most interesting aspect of Shah’s work is the alternative it provides to so-called parametric models, which are more restrictive. These, he says, were “the state of the art up until 2008, when Professor Shah’s paper first came out.”

“They’ve really, substantially enlarged the class of choice models that you can work with,” Rusmevichientong says. “Before, people never thought that it was possible to have rich, complex choice models like this.”

The next step, Rusmevichientong says, is to test that type of model selection against real-world data. The analysis of car sales is an early example of that kind of testing, and the MIT researchers are currently working up a version of their conference paper for journal publication. “I’ve been waiting to see the paper,” Rusmevichientong says. “That sounds really exciting.”

7
EEE / While you’re up, print me a solar cell
« on: May 09, 2018, 01:26:17 PM »

The new technology, developed by a team of researchers at MIT for printing solar cells directly on ordinary paper, cloth or plastic, is reported in a paper in the journal Advanced Materials, published online July 8. The paper is co-authored by Karen Gleason, the Alexander and I. Michael Kasser Professor of Chemical Engineering; Professor of Electrical Engineering Vladimir Bulović; graduate student Miles Barr; and six other students and postdocs. The work was supported by the Eni-MIT Alliance Solar Frontiers Program and the National Science Foundation.

The technique represents a major departure from the systems used until now to create most solar cells, which require exposing the substrates to potentially damaging conditions, either in the form of liquids or high temperatures. The new printing process uses vapors, not liquids, and temperatures less than 120 degrees Celsius. These “gentle” conditions make it possible to use ordinary untreated paper, cloth or plastic as the substrate on which the solar cells can be printed.

It is, to be sure, a bit more complex than just printing out a term paper. In order to create an array of photovoltaic cells on the paper, five layers of material need to be deposited onto the same sheet of paper in successive passes, using a mask (also made of paper) to form the patterns of cells on the surface. And the process has to take place in a vacuum chamber.

The basic process is essentially the same as the one used to make the silvery lining in your bag of potato chips: a vapor-deposition process that can be carried out inexpensively on a vast commercial scale.

The resilient solar cells still function even when folded up into a paper airplane. In their paper, the MIT researchers also describe printing a solar cell on a sheet of PET plastic (a thinner version of the material used for soda bottles) and then folding and unfolding it 1,000 times, with no significant loss of performance. By contrast, a commercially produced solar cell on the same material failed after a single folding.

“We have demonstrated quite thoroughly the robustness of this technology,” Bulović says. In addition, because of the low weight of the paper or plastic substrate compared to conventional glass or other materials, “we think we can fabricate scalable solar cells that can reach record-high watts-per-kilogram performance. For solar cells with such properties, a number of technological applications open up,” he says. For example, in remote developing-world locations, weight makes a big difference in how many cells could be delivered in a given load.

8
EEE / Computer learns language by playing games
« on: May 09, 2018, 01:25:27 PM »
Computers are great at treating words as data: Word-processing programs let you rearrange and format text however you like, and search engines can quickly find a word anywhere on the Web. But what would it mean for a computer to actually understand the meaning of a sentence written in ordinary English — or French, or Urdu, or Mandarin?

One test might be whether the computer could analyze and follow a set of instructions for an unfamiliar task. And indeed, in the last few years, researchers at MIT’s Computer Science and Artificial Intelligence Lab have begun designing machine-learning systems that do exactly that, with surprisingly good results.

In 2009, at the annual meeting of the Association for Computational Linguistics (ACL), researchers in the lab of Regina Barzilay, associate professor of computer science and electrical engineering, took the best-paper award for a system that generated scripts for installing a piece of software on a Windows computer by reviewing instructions posted on Microsoft’s help site. At this year’s ACL meeting, Barzilay, her graduate student S. R. K. Branavan and David Silver of University College London applied a similar approach to a more complicated problem: learning to play “Civilization,” a computer game in which the player guides the development of a city into an empire across centuries of human history. When the researchers augmented a machine-learning system so that it could use a player’s manual to guide the development of a game-playing strategy, its rate of victory jumped from 46 percent to 79 percent.

Starting from scratch

“Games are used as a test bed for artificial-intelligence techniques simply because of their complexity,” says Branavan, who was first author on both ACL papers. “Every action that you take in the game doesn’t have a predetermined outcome, because the game or the opponent can randomly react to what you do. So you need a technique that can handle very complex scenarios that react in potentially random ways.”

Moreover, Barzilay says, game manuals have “very open text. They don’t tell you how to win. They just give you very general advice and suggestions, and you have to figure out a lot of other things on your own.” Relative to an application like the software-installing program, Branavan explains, games are “another step closer to the real world.”

The extraordinary thing about Barzilay and Branavan’s system is that it begins with virtually no prior knowledge about the task it’s intended to perform or the language in which the instructions are written. It has a list of actions it can take, like right-clicks or left-clicks, or moving the cursor; it has access to the information displayed on-screen; and it has some way of gauging its success, like whether the software has been installed or whether it wins the game. But it doesn’t know what actions correspond to what words in the instruction set, and it doesn’t know what the objects in the game world represent.

So initially, its behavior is almost totally random. But as it takes various actions, different words appear on screen, and it can look for instances of those words in the instruction set. It can also search the surrounding text for associated words, and develop hypotheses about what actions those words correspond to. Hypotheses that consistently lead to good results are given greater credence, while those that consistently lead to bad results are discarded.
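
A toy version of that loop shows the idea; the one-word “manual,” the action list and the rewards below are all invented, and the real system uses a far richer statistical model than this simple weighting.

```python
# Toy sketch of the learning loop described above (illustrative only):
# hypotheses linking words to actions gain weight when acting on them
# leads to good outcomes, and lose weight when it doesn't.

import random

def update(weights, word, action, reward, lr=0.5):
    weights[(word, action)] = weights.get((word, action), 0.0) + lr * reward

def choose_action(weights, word, actions, explore=0.2):
    if random.random() < explore:                      # keep exploring
        return random.choice(actions)
    return max(actions, key=lambda a: weights.get((word, a), 0.0))

# Hypothetical mini-world: the instruction word "click" should map to LEFT_CLICK.
actions = ["LEFT_CLICK", "RIGHT_CLICK", "MOVE_CURSOR"]
weights = {}
for _ in range(200):
    act = choose_action(weights, "click", actions)
    reward = 1.0 if act == "LEFT_CLICK" else -0.1      # did the step succeed?
    update(weights, "click", act, reward)

print(max(actions, key=lambda a: weights.get(("click", a), 0.0)))  # LEFT_CLICK
```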

Proof of concept

In the case of software installation, the system was able to reproduce 80 percent of the steps that a human reading the same instructions would execute. In the case of the computer game, it won 79 percent of the games it played, while a version that didn't rely on the written instructions won only 46 percent. The researchers also tested a more-sophisticated machine-learning algorithm that eschewed textual input but used additional techniques to improve its performance. Even that algorithm won only 62 percent of its games.

“If you’d asked me beforehand if I thought we could do this yet, I’d have said no,” says Eugene Charniak, University Professor of Computer Science at Brown University. “You are building something where you have very little information about the domain, but you get clues from the domain itself.”

Charniak points out that when the MIT researchers presented their work at the ACL meeting, some members of the audience argued that more sophisticated machine-learning systems would have performed better than the ones to which the researchers compared their system. But, Charniak adds, “it’s not completely clear to me that that’s really relevant. Who cares? The important point is that this was able to extract useful information from the manual, and that’s what we care about.”

Most computer games as complex as “Civilization” include algorithms that allow players to play against the computer, rather than against other people; the games’ programmers have to develop the strategies for the computer to follow and write the code that executes them. Barzilay and Branavan say that, in the near term, their system could make that job much easier, automatically creating algorithms that perform better than the hand-designed ones.

But the main purpose of the project, which was supported by the National Science Foundation, was to demonstrate that computer systems that learn the meanings of words through exploratory interaction with their environments are a promising subject for further research. And indeed, Barzilay and her students have begun to adapt their meaning-inferring algorithms to work with robotic systems.

9
EEE / The too-smart-for-its-own-good grid
« on: May 09, 2018, 01:24:41 PM »
In the last few years, electrical utilities have begun equipping their customers’ homes with new meters that have Internet connections and increased computational capacity. One envisioned application of these “smart meters” is to give customers real-time information about fluctuations in the price of electricity, which might encourage them to defer some energy-intensive tasks until supply is high or demand is low. Less of the energy produced from erratic renewable sources such as wind and solar would thus be wasted, and utilities would less frequently fire up backup generators, which are not only more expensive to operate but tend to be more polluting, too.

Recent work by researchers in MIT’s Laboratory for Information and Decision Systems, however, shows that this policy could backfire. If too many people set appliances to turn on, or devices to recharge, when the price of electricity crosses the same threshold, it could cause a huge spike in demand; in the worst case, that could bring down the power grid. Fortunately, in a paper presented at the last IEEE Conference on Decision and Control, the researchers also show that some relatively simple types of price controls could prevent huge swings in demand. But that stability would come at the cost of some of the efficiencies that real-time pricing is intended to provide.

Today, customers receive monthly electrical bills that indicate the cost of electricity as a three- to six-month average. In fact, however, the price that power producers charge utilities fluctuates every five minutes or so, according to market conditions. The electrical system is thus what control theorists call an open loop: Price varies according to demand, but demand doesn’t vary according to price. Smart meters could close that loop, drastically changing the dynamics of the system.

Taking control

Research scientist Mardavij Roozbehani and professors Sanjoy Mitter and Munther Dahleh assumed that every consumer has a “utility function” describing how inconvenient it is for him or her to defer electricity usage. While that function will vary from person to person, individual utility functions can be pooled into a single collective function for an entire population. The researchers assumed that on average, consumers will seek to maximize the difference between the utility function and the cost of electricity: That is, they’ll try to get as much convenience for as little money as possible.

What they found was that if consumer response to price fluctuation is large enough to significantly alter patterns of energy use — and if it’s not, there’s no point in installing smart meters — then price variations well within the normal range can cause dangerous oscillations in demand. “For the system to work, supply and demand must match almost perfectly at each instant of time,” Roozbehani says. “The generators have what are called ramp constraints: They cannot ramp up their production arbitrarily fast, and they cannot ramp it down arbitrarily fast. If these oscillations become very wild, they’ll have a hard time keeping track of the demand. And that’s bad for everyone.”

The researchers’ model, however, also indicates that at least partially shielding consumers from the volatility of the market could tame those oscillations. For instance, Roozbehani explains, utilities could give consumers price updates every hour or so, instead of every five minutes. Or, he says, “if the prices in the wholesale market are varying very widely, I pass the consumer a price that reflects the wholesale market conditions but not to that extent. If the prices in the wholesale market just doubled, I don’t give the consumer a price that is double the previous time interval but a price that is slightly higher.” According to Roozbehani, the same theoretical framework that he and his colleagues adopt in their paper should enable the analysis and development of practical pricing models.
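
A back-of-the-envelope simulation illustrates both effects. It is not the paper’s model; the linear demand curve and the smoothing knob alpha are invented. With full pass-through of the wholesale price the demand swings persist, while a damped price settles.

```python
# Illustrative closed-loop simulation (not the paper's model): demand that
# reacts sharply to the last posted price can oscillate; passing consumers
# a damped price tempers the swings. 'alpha' is a made-up smoothing knob.

def closed_loop(alpha, steps=8):
    price, history = 1.0, []
    for _ in range(steps):
        demand = 2.0 - price          # high price -> consumers defer usage
        wholesale = 0.5 + demand      # high demand -> higher wholesale price
        price = alpha * wholesale + (1 - alpha) * price   # damped pass-through
        history.append(round(demand, 2))
    return history

print(closed_loop(alpha=1.0))   # full pass-through: [1.0, 0.5, 1.0, 0.5, ...]
print(closed_loop(alpha=0.3))   # damped: [1.0, 0.85, 0.79, 0.77, ...] settles
```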

The trade-off

But minimizing the risks of giving consumers real-time pricing information also diminishes the benefits. “Possibly, when you need an aggressive response from the consumers — say the wind drops — you’re not going to get it,” Roozbehani says.

One way to improve that trade-off, Roozbehani explains, would be for customers to actually give utilities information about how they would respond to different prices at different times. Utilities could then tune the prices that they pass to consumers much more precisely, to maximize responsiveness to fluctuations in the market while minimizing the risk of instability. Collecting that information would be difficult, but Roozbehani’s hunch is that the benefits would outweigh the costs. He’s currently working on expanding his model so that it factors in the value of information, to see if his hunch is right.

“As far as I know, very, very few people are analyzing the dynamics of electricity markets with experience from control theory,” says Eugene Litvinov, senior director of business architecture and technology at ISO New England, the organization that oversees the operation of the electrical grid in the six New England states. “I think we should encourage these kinds of studies, because regulatory bodies and government are pushing for certain things, and they don’t realize how far they can push. For example, they want to have 30 percent wind penetration by 2020, or something like this, but that could cause serious issues for the grid. Without that kind of analysis, the operators would be very uncomfortable just jumping over the cliff.”

But, Litvinov adds, an accurate model of the dynamics of energy consumption would have to factor in consumers’ responses, not only to changing electricity prices, but also to each other’s responses. “It’s like a game,” Litvinov says. “People will have to start adopting more sophisticated strategies. That whole dynamic is itself a subject for study.” Roozbehani agrees, pointing out that he, Dahleh, Mitter, and colleagues have already published research that begins to examine exactly the questions that Litvinov raises.

10
EEE / Portable, super-high-resolution 3-D imaging
« on: May 09, 2018, 01:23:59 PM »
By combining a clever physical interface with computer-vision algorithms, researchers in MIT’s Department of Brain and Cognitive Sciences have created a simple, portable imaging system that can achieve resolutions previously possible only with large and expensive lab equipment. The device could provide manufacturers with a way to inspect products too large to fit under a microscope and could also have applications in medicine, forensics and biometrics.

The heart of the system, dubbed GelSight, is a slab of transparent, synthetic rubber, one of whose sides is coated with a paint containing tiny flecks of metal. When pressed against the surface of an object, the paint-coated side of the slab deforms. Cameras mounted on the other side of the slab photograph the results, and computer-vision algorithms analyze the images.

In a 2009 paper, Edward Adelson, the John and Dorothy Wilson Professor of Vision Science and a member of the Computer Science and Artificial Intelligence Laboratory, and Micah Kimo Johnson, who was a postdoc in Adelson’s lab at the time, reported on an earlier version of GelSight, which was sensitive enough to detect the raised ink patterns on a $20 bill. At this year’s Siggraph — the premier conference on computer graphics — Adelson and Johnson, along with graduate student Alvin Raj and postdoc Forrester Cole, are presenting a new, higher-resolution version of GelSight that can register physical features less than a micrometer in depth and about two micrometers across.

Moreover, because GelSight makes multiple measurements of the rubber's deformation, with light coming in at several different angles, it can produce 3-D models of an object, which can be manipulated on a computer screen.
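
That multi-angle measurement is in the spirit of classical photometric stereo. As an illustrative sketch rather than the paper’s actual reconstruction pipeline, for an ideally matte surface under known light directions the per-pixel surface normal falls out of a least-squares solve:

```python
# A standard photometric-stereo step of the kind such systems build on
# (illustrative; the paper's actual pipeline may differ): for a Lambertian
# surface with known light directions L, the observed intensities satisfy
# I = L @ n, so the surface normal n follows from least squares per pixel.

import numpy as np

L = np.array([[0.0, 0.0, 1.0],      # three known lighting directions
              [0.7, 0.0, 0.7],
              [0.0, 0.7, 0.7]])

true_n = np.array([0.1, -0.2, 0.97])
I = L @ true_n                       # intensities a pixel would record

n, *_ = np.linalg.lstsq(L, I, rcond=None)
n /= np.linalg.norm(n)               # unit surface normal
print(np.round(n, 3))                # ~ [ 0.1   -0.201  0.974]
```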

Traditionally, generating micrometer-scale images has required a large, expensive piece of equipment such as a confocal microscope or a white-light interferometer, which might take minutes or even hours to produce a 3-D image. Often, such a device has to be mounted on a vibration isolation table, which might consist of a granite slab held steady by shock absorbers. But Adelson and Johnson have built a prototype sensor, about the size of a soda can, which an operator can move over the surface of an object with one hand, and which produces 3-D images almost instantly.

Adelson and Johnson are already in discussion with one major aerospace company and several manufacturers of industrial equipment, all of whom are interested in using GelSight to check the integrity of their products. The technology has also drawn the interest of experts in criminal forensics, who think that it could provide a cheap, efficient way to identify the impressions that particular guns leave on the casings of spent shells. There could also be applications in dermatology — distinguishing moles from cancerous growths — and even biometrics. The resolution provided by GelSight is much higher than is required to distinguish fingerprints, but “the fingerprinting people keep wanting to talk to us,” Adelson says, laughing.



11
EEE / Increasing fuel efficiency with a smartphone
« on: May 09, 2018, 01:22:12 PM »
In July, at the Association for Computing Machinery’s MobiSys conference, researchers from MIT and Princeton University took the best-paper award for a system that uses a network of smartphones mounted on car dashboards to collect information about traffic signals and tell drivers when slowing down could help them avoid waiting at lights. By reducing the need to idle and accelerate from a standstill, the system saves gas: In tests conducted in Cambridge, Mass., it helped drivers cut fuel consumption by 20 percent.

Cars are responsible for 28 percent of the energy consumption and 32 percent of the carbon dioxide emissions in the United States, says Emmanouil Koukoumidis, a visiting researcher at MIT who led the project. “If you can save even a small percentage of that, then you can have a large effect on the energy that the U.S. consumes,” Koukoumidis says.

The system is intended to capitalize on a growing trend, in which drivers install brackets on their dashboards so that they can use their smartphone as a GPS navigator while driving. But unlike previous in-car cellphone applications, the new system, dubbed SignalGuru, relies on images captured by the phones’ cameras. According to Koukoumidis, the computing infrastructure that underlies the system could be adapted to a wide range of applications: The camera could, for instance, capture information about prices at different gas stations, about the locations and rates of progress of city buses, or about the availability of parking spaces in urban areas, all of which could be useful to commuters.

Fixed or flexible?

Koukoumidis is a student of Li-Shiuan Peh, an associate professor in the Department of Electrical Engineering and Computer Science who came to MIT from Princeton in fall 2009. Koukoumidis came with her, and together they launched the SignalGuru project as part of the Singapore-MIT Alliance for Research and Technology’s Future Urban Mobility program. Koukoumidis’s other thesis advisor, Princeton’s Margaret Martonosi, is the third author on the MobiSys paper.

In addition to testing SignalGuru in Cambridge, where traffic lights are on fixed schedules, the researchers also tested it in Singapore, where the duration of lights varies continuously according to fluctuations in traffic flow. In Cambridge, the system was able to predict when lights would change with an error of only two-thirds of a second. In suburban Singapore, the error increased to slightly more than a second, and at one particular light in densely populated central Singapore, it went up to more than two seconds. “The good news for the U.S.,” Koukoumidis says, “is that most signals in the U.S. are dummy signals” — signals with fixed schedules. But even an accuracy of two and a half seconds, Koukoumidis says, “could very well help you avoid stopping at an intersection.” Moreover, he points out, the predictions for variable signals would improve as more cars were outfitted with the system, collecting more data.

Theory into practice

In addition to designing an application that instructs drivers when to slow down, the researchers also modeled the effect of instructing them to speed up to catch lights. But "we think that this application is not a safe thing to have," Koukoumidis says. The version of the application that the researchers used in their tests graphically displays the optimal speed for avoiding a full stop at the next light, but a commercial version, Koukoumidis says, would probably use audio prompts instead.
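
The heart of such an advisory is simple arithmetic. Here is a hedged back-of-the-envelope version; the speed limit, distance and timing below are invented, and SignalGuru’s real advisory logic, which must account for queues and safety margins, is more involved:

```python
# Back-of-the-envelope slow-down advisory (illustrative only): given the
# distance to the light and the predicted seconds until it turns green,
# suggest the fastest legal speed that arrives just after the change.

def advisory_speed_kmh(distance_m, seconds_to_green, speed_limit_kmh=50.0):
    if seconds_to_green <= 0:
        return speed_limit_kmh                  # light is green: carry on
    v = (distance_m / seconds_to_green) * 3.6   # m/s -> km/h
    return min(v, speed_limit_kmh)

print(advisory_speed_kmh(200, 18))  # 40.0 km/h avoids a full stop
print(advisory_speed_kmh(200, 10))  # 50.0 km/h (capped at the limit)
```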

Koukoumidis envisions that the system could also be used in conjunction with existing routing software. Rather than recommending, for instance, that a car slow to a crawl to avoid a red light, it might suggest ducking down a side street.

“SignalGuru is a great example of how mobile phones can be used to offer new transportation services, and in particular services that had traditionally been thought to require vehicle-to-vehicle communication systems,” says Marco Gruteser, an associate professor of electrical and computer engineering in the Wireless Information Network Laboratory at Rutgers University. “There is a much more infrastructure-oriented approach where transmitters are built into traffic lights and receivers are built into cars, so there’s a much higher technology investment needed.”

One obstacle to commercial deployment of the system, Gruteser says, could be “finding a way to get the participation numbers required for this type of crowd-sourcing solution. There’s a lot of people who have to use the system to provide fresh sensing data.” Additional traffic-related applications, of the type that Koukoumidis is investigating, could be one way to drive participation, Gruteser says, but they won’t emerge overnight. “The processing algorithms would be a little more complex,” Gruteser says.

12
EEE / Mimicking cells with transistors
« on: May 09, 2018, 01:21:32 PM »
As the world has become less analog and more digital — as tape decks and TV antennas have given way to MP3 players and streaming video — electrical engineers’ habits of thought have changed, too. In the analog world, they used to think mostly in terms of quantities such as voltage, which are continuous, meaning they can take on an infinite range of values. Now, they tend to think more in terms of 0s and 1s, the binary oppositions of digital logic.

Since the completion of the Human Genome Project, two thriving new disciplines — synthetic biology and systems biology — have emerged from the observation that in some ways, the sequences of chemical reactions that lead to protein production in cells are a lot like electronic circuits. In general, researchers in both fields tend to analyze reactions in terms of binary oppositions: If a chemical is present, one thing happens; if the chemical is absent, a different thing happens.

But Rahul Sarpeshkar, an associate professor of electrical engineering in MIT’s Research Laboratory of Electronics (RLE), thinks that’s the wrong approach. “The signals in cells are not ones or zeroes,” Sarpeshkar says. “That’s an overly simplified abstraction that is kind of a first, crude, useful approximation for what cells do. But everybody knows that’s really wrong.”

At the Biomedical Circuits and Systems Conference in San Diego in November, Sarpeshkar, research scientist Lorenzo Turicchia, postdoc Ramiz Daniel and graduate student Sung Sik Woo, all of RLE, will present a paper in which they use analog electronic circuits to model two different types of interactions between proteins and DNA in the cell. The circuits mimic the behaviors of the cell with remarkable accuracy, but perhaps more important, they do it with far fewer transistors than a digital model would require.

The work could point the way toward electronic simulations of biological systems that not only are simpler to build and more accurate, but run much more efficiently. It also suggests a new framework for analyzing and designing the biochemical processes that govern cell behavior.

Shades of gray

A transistor is basically a switch: When it’s on, it conducts electricity; when it’s off, it doesn’t. In a computer chip, those two states represent 0s and 1s.

But in moving between its nonconductive and conductive states, a transistor passes through every state in between — slightly conductive, moderately conductive, fairly conductive — just as a car accelerating from zero to 60 passes through every speed in between. Because the transistors in a computer chip are intended to perform binary logic operations, they’re designed to make those transitional states undetectable. But it’s the transitional states that Sarpeshkar and his colleagues are trying to exploit.

“Let’s say the cell is a pancreatic cell making insulin,” Sarpeshkar says. “Well, when the glucose goes up, it wants to make more insulin. But it’s not bang-bang. If the glucose goes up more, it’ll make more insulin. If the glucose goes down a little, it’s going to make less insulin. It’s graded. It’s not a logic gate.”

Treated as an analog device, a single transistor has an infinite range of possible conductivities, so it could model an infinite range of chemical concentrations. But treated as a binary switch, a transistor has only two possible states, so modeling a large but finite range of concentrations would require a whole bank of transistors. For large circuits that model sequences of reactions within the cell, binary logic rapidly becomes unmanageably complex. But analog circuits don’t. Indeed, analog circuits exploit the same types of physical phenomena that make the cellular machinery so efficient in the first place.
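
The insulin example can be made concrete. A graded dose-response is conventionally written as a Hill function, while the digital abstraction replaces it with a threshold; the sketch below (with illustrative parameters, not a model from the paper) shows what the binary view throws away.

```python
def hill(x, k=5.0, n=2.0):
    """Graded, analog-style response: rises smoothly with the input."""
    return x ** n / (k ** n + x ** n)

def gate(x, threshold=5.0):
    """Digital abstraction of the same process: all or nothing."""
    return 1.0 if x >= threshold else 0.0

# hill() makes "a little more insulin" for a little more glucose;
# gate() jumps from 0 to 1 and discards the gradation in between.
for glucose in (2, 4, 5, 6, 8):
    print(glucose, round(hill(glucose), 2), gate(glucose))
```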

“If you think about it, what is electronics?” Sarpeshkar says. “It’s the motion of electrons. What is chemistry? Chemistry is about electrons moving from one atom or molecule to another atom or molecule. They must be deeply connected: They’re both about the electrons.”

Validation

For their new paper, the RLE researchers performed their own biological experiments, measuring the effects of gradually increasing the concentrations of two different proteins within the cell. Both proteins prompt the cell to start producing other proteins, but they do it in different ways: One of them binds to a strand of DNA and causes the cell to increase production of a particular protein; the other deactivates a protein that suppresses production of another protein.

Sarpeshkar and his colleagues were able to model both processes using circuits with only eight transistors each. Moreover, the circuits turned out to form mirror images of each other, representing the difference between directly activating protein production and deactivating a deactivator. And finally, the circuits modeled the interactions of the genes and proteins with remarkable accuracy.
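
The mirror-image relationship has a tidy mathematical counterpart: under Hill kinetics with matched parameters, an activating response and a repressing response are exact complements, which is why deactivating a deactivator looks like direct activation. A brief illustrative sketch (textbook kinetics, not the eight-transistor circuits themselves):

```python
def activate(x, k=1.0, n=2.0):
    """Hill activation: the input turns production on directly."""
    return x ** n / (k ** n + x ** n)

def repress(x, k=1.0, n=2.0):
    """Hill repression: the input turns production off."""
    return k ** n / (k ** n + x ** n)

# The two curves are exact mirror images (they sum to 1 at every x),
# so suppressing a suppressor traces out an activating response.
for x in (0.25, 1.0, 4.0):
    print(x, round(activate(x), 3), round(repress(x), 3),
          activate(x) + repress(x))
```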

“The concept of using a single transistor to implement an entire equation — which, implemented on a digital computer, would take several lines of code, and if you look inside the box it would be millions of transistors — is definitely an advance,” says Gert Cauwenberghs, a professor of bioengineering and biology at the University of California at San Diego. “The extreme variability in biological systems, and the fact that the systems are still resilient against variations, would suggest that analog circuits, which have some of the same physical thermodynamic principles and noise embedded in them, would be a good implementation platform.”

Cauwenberghs cautions that to be useful to biologists, an analog model of genetic circuits would have to be much more complex than the one that the RLE researchers describe in their new paper. Building such a model, he says, will require as much work by biologists, in generating accurate data about chemical concentrations in the cell, as by electrical engineers. But “there’s definitely synergy between these two domains,” he says.

13
EEE / Connecting neurons to fix the brain
« on: May 09, 2018, 01:20:11 PM »
Each of the brain’s 100 billion neurons forms thousands of connections with other neurons. These connections, known as synapses, allow cells to rapidly share information, coordinate their activities, and achieve learning and memory. Breakdowns in those connections have been linked to neurological disorders including autism and Alzheimer’s disease, as well as decline of memory during normal aging.

Many scientists believe that strengthening synaptic connections could offer a way to treat those diseases, as well as age-related decline in brain function. To that end, a team of MIT researchers has developed a new way to grow synapses between cells in a laboratory dish, under very controlled conditions that enable rapid, large-scale screens for potential new drugs.

Using their new technology, the researchers have already identified several compounds that can strengthen synapses. Such drugs could help compensate for the cognitive decline seen in Alzheimer’s, says Mehmet Fatih Yanik, the Robert J. Shillman (1974) Career Development Associate Professor of Electrical Engineering at MIT and leader of the research team. Yanik and his colleagues described the technology in the Oct. 25 online edition of the journal Nature Communications.

Lead author of the study is MIT postdoc Peng Shi. Other authors are MIT graduate students Mark Scott and Zachary Wissner-Gross; Stephen Haggarty, Balaram Ghosh and Dongpeng Wan of Harvard University; and Ralph Mazitschek of Massachusetts General Hospital, who developed and analyzed the potential drug compounds screened in the study.

At a synapse, a neuron sends signals to one or more cells by releasing chemicals called neurotransmitters, which influence the activity of the recipient cell. Scientists can induce neurons grown in a lab dish to form synapses, but this usually produces a jumble of connections that is difficult to study.

In the new setup devised by Yanik and his colleagues, presynaptic neurons (those that send messages across a synapse) are grown in individual compartments on a lab dish. The compartments have only one opening, into a tiny channel that leads to another compartment. The presynaptic neuron sends its long axon through the channel into the other compartment, where it can form synaptic connections with cells arranged in a grid. “That way we can induce synapses in very well-defined positions,” Yanik says.

Using this technique, the researchers can create hundreds of thousands of synapses on a single lab dish, then use them to test the effects of potential drug compounds. This technique can detect changes in synaptic strength with 10 times more sensitivity than existing methods.

In this study, the researchers created and tested variants of a type of molecule known as an HDAC inhibitor. HDACs are enzymes that control how tightly DNA is wound inside the cell nucleus, which determines which genes can be copied and expressed. HDAC inhibitors, which loosen DNA coils and reveal genes that had been turned off, are now being pursued as potential treatments for Alzheimer’s and other neurodegenerative diseases.

The researchers’ goal was to find HDAC inhibitors that specifically turn on genes that enhance synaptic connections. To determine which had the strongest effects, they measured the amount of a protein called synapsin found in the presynaptic neurons. Those tests yielded several HDAC inhibitors that strengthened synapses, with the best one improving synapse strength by 300 percent.
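
In screening terms, the readout reduces to comparing the synapsin signal in treated and control samples and ranking compounds by the change. A hypothetical analysis sketch, with made-up fluorescence values and an “improvement” convention of percent change over control:

```python
# Hypothetical synapsin fluorescence readings, arbitrary units.
control = 100.0
treated = {"compound_A": 400.0, "compound_B": 150.0, "compound_C": 95.0}

def percent_improvement(signal, baseline):
    """Percent change in the synaptic readout relative to untreated
    control; 400 vs. 100 counts as a 300 percent improvement here."""
    return 100.0 * (signal - baseline) / baseline

ranked = sorted(treated, reverse=True,
                key=lambda c: percent_improvement(treated[c], control))
for name in ranked:
    print(name, f"{percent_improvement(treated[name], control):+.0f}%")
```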

Several HDAC inhibitors had little effect on synaptic strength, demonstrating the importance of finding HDAC inhibitors specific to synaptic genes.

The new technology offers a significant improvement over existing methods for growing synapses and studying their formation, says Matthew Dalva, associate professor of neuroscience at Thomas Jefferson University, who was not part of the research team. “Right now we know so little about synapse formation, so this could open new doors,” he says.

In future studies, this system could also be used to examine the connections between specific types of neurons obtained from different regions in the brain, such as those thought to be impaired in people with autism. Yanik plans to make the technology available to other research groups interested in doing such studies.

14
EEE / Charging toward better neural implants
« on: May 09, 2018, 01:19:26 PM »
Electrical implants that shut down excessive activity in brain cells hold great potential for treating epilepsy and chronic pain. Likewise, devices that enhance neurons’ activity may help restore function to people with nerve damage.

A new technology developed at MIT and Harvard Medical School may overcome the primary drawback to this approach, known as functional electrical stimulation: When electrical current is applied, it can spread to nearby nerves, causing painful side effects.

Nerves, the long bundles of neuronal extensions that carry instructions to the muscles — as well as sensory information such as pain — communicate via extremely rapid electrical signals. By manipulating the concentration of charged ions surrounding a nerve, the researchers were able to dramatically reduce the current needed to keep an impulse going; they could also interrupt an impulse as it traveled along a nerve.

“Functional electrical stimulation, as an idea, has been around for a long time, but its implementation in our body as a prosthetic is still in its infancy,” says Jongyoon Han, associate professor of electrical engineering and computer science and biological engineering at MIT and a member of MIT’s Research Laboratory of Electronics (RLE). “There are a lot of technological reasons for that, and I think our work is helping to relieve some of those technological bottlenecks.”

Han and his colleagues described the new technology in the Oct. 23 online edition of Nature Materials. Lead author of the paper is Yong-Ak Song, a research scientist in RLE.

Treating damage with electricity

Functional electrical stimulation involves implantation into the body of an electrode that comes into contact with a nerve. When a small current passes through the electrode, it activates the nerve. This approach has been used successfully in cochlear implants, which capture sound waves and transform them into electrical signals that can stimulate the auditory nerves, allowing some deaf people to hear.

Researchers now hope to use a similar strategy to stimulate damaged nerves that signal muscles to contract. “The problem is that the electricity can go everywhere, because our bodies are conductors,” Han says.

This is of particular concern in the face, because the nerves that control facial movements and the sensory nerves that carry pain signals are so close together. Samuel Lin, an assistant professor of surgery at Harvard Medical School and an author of the paper, is a plastic surgeon who sees many patients with facial nerve damage.

Lin, Han, Song and their colleagues set out to find a way to reduce the amount of current needed to stimulate the motor nerves, which would diminish the effect on nearby nerves. To do this, they decided to make it easier to provoke nerve impulses by manipulating the concentration of ions that surround the nerves.

The key to a neuron’s ability to carry an electric current is its negatively charged interior, relative to the fluid surrounding it. It was already known that closing this gap in voltage makes neurons easier to excite, so the MIT team decided to alter the ion concentrations by coating the electrode with a thin layer of an ion-selective membrane. These membranes, commercially available, are like filters for ions: They allow certain ions, such as potassium or calcium, to pass through, but not others.

The researchers achieved their best results with a membrane that removes positively charged calcium ions from the fluid surrounding the nerves. This calcium depletion influences voltage-gated ion channels and reduces membrane resistance, making it easier to activate the neuron when an electric current is applied. Using this technology, the researchers were able to reduce the amount of electrical current needed by about 70 percent, from 7.4 to 2.2 microamperes.
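
The scale of such concentration effects can be gauged with textbook electrochemistry. The Nernst equation gives an ion’s equilibrium potential from its concentration ratio across the membrane; the sketch below uses generic textbook calcium concentrations, and the paper’s actual mechanism involves channel gating and membrane resistance rather than equilibrium potentials alone.

```python
import math

R, F, T = 8.314, 96485.0, 310.0  # gas constant, Faraday constant, 37 C in K

def nernst_mV(c_out, c_in, z):
    """Equilibrium potential (mV) for an ion of valence z at 37 C."""
    return 1000.0 * (R * T / (z * F)) * math.log(c_out / c_in)

# Divalent calcium (z = 2): a tenfold extracellular depletion shifts the
# equilibrium potential by about 31 mV (textbook concentrations, mol/L).
print(round(nernst_mV(2e-3, 1e-7, z=2)))   # normal:   ~ +132 mV
print(round(nernst_mV(2e-4, 1e-7, z=2)))   # depleted: ~ +101 mV
```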

They were also able to stop electrical impulses from traveling along a nerve, using about half of the current previously needed. This could have important applications in shutting off the haywire electrical activity characteristic of epilepsy, and in relieving chronic pain.

This technology could make existing functional electrical stimulation devices more efficient, says Peter Kjäll, project leader of the organic bioelectronics group at the Swedish Medical Nanoscience Center. Though more work is needed to make the implants suitable for human tests, “I’m sure that this technology will form the basis of many future neuroprosthetic devices,” says Kjäll, who was not part of the research team.

15
EEE / Important step toward computing with light
« on: May 09, 2018, 01:15:34 PM »
There has been enormous progress in recent years toward the development of photonic chips — devices that use light beams instead of electrons to carry out their computational tasks. Now, researchers at MIT have filled in a crucial piece of the puzzle that could enable the creation of photonic chips on the standard silicon material that forms the basis for most of today’s electronics.

In many of today’s communication systems, data travels via light beams transmitted through optical fibers. Once the optical signal arrives at its destination, it is converted to electronic form, processed through electronic circuits and then converted back to light using a laser. The new device could eliminate those extra electronic-conversion steps, allowing the light signal to be processed directly.

The new component is a “diode for light,” says Caroline Ross, the Toyota Professor of Materials Science and Engineering at MIT, who is co-author of a paper reporting the new device that was published online Nov. 13 in the journal Nature Photonics. It is analogous to an electronic diode, a device that allows an electric current to flow in one direction but blocks it from going the other way; in this case, it creates a one-way street for light, rather than electricity.

This is essential, Ross explains, because without such a device stray reflections could destabilize the lasers used to produce the optical signals and reduce the efficiency of the transmission. Currently, a discrete device called an isolator is used to perform this function, but the new system would allow this function to be part of the same chip that carries out other signal-processing tasks.

To develop the device, the researchers had to find a material that is both transparent and magnetic — two characteristics that rarely occur together. They ended up using a form of a material called garnet, which is normally difficult to grow on the silicon wafers used for microchips. Garnet is desirable because it inherently transmits light differently in one direction than in another: It has a different index of refraction — the bending of light as it enters the material — depending on the direction of the beam.

The researchers were able to deposit a thin film of garnet to cover one half of a loop connected to a light-transmitting channel on the chip. The result was that light traveling through the chip in one direction passes freely, while a beam going the other way gets diverted into the loop.
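
The behavior can be approximated with a toy all-pass ring-resonator model: a direction-dependent index gives the two propagation directions different round-trip phases, so one direction sits on resonance and is pulled into the loop while the other passes through. A sketch with made-up coupling values, not the device’s actual parameters:

```python
import cmath, math

def through_transmission(phi, t=0.95, a=0.95):
    """Power transmission of an all-pass ring resonator: round-trip phase
    phi, self-coupling t, single-pass amplitude a. Transmission dips
    toward zero on resonance (phi = 2*pi*m) and is near one elsewhere."""
    field = (t - a * cmath.exp(1j * phi)) / (1 - t * a * cmath.exp(1j * phi))
    return abs(field) ** 2

# A direction-dependent index gives the two directions different phases;
# the 0.3 rad offset is purely illustrative.
print(through_transmission(2 * math.pi + 0.3))  # forward:  ~0.89, passes
print(through_transmission(2 * math.pi))        # backward: ~0, into the loop
```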

The whole system could be made using standard microchip manufacturing machinery, Ross says. “It simplifies making an all-optical chip,” she says. The design of the circuit can be produced “just like an integrated-circuit person can design a whole microprocessor. Now, you can do an integrated optical circuit.”

That could make it much easier to commercialize than a system based on different materials, Ross says. “A silicon platform is what you want to use,” she says, because “there’s a huge infrastructure for silicon processing. Everyone knows how to process silicon. That means they can set about developing the chip without having to worry about new fabrication techniques.”

This technology could greatly boost the speed of data-transmission systems, for two reasons: First, light travels much faster than electrons. Second, while wires can only carry a single electronic data stream, optical computing enables multiple beams of light, carrying separate streams of data, to pass through a single optical fiber or circuit without interference. “This may be the next generation in terms of speed” for communications systems, Ross says.

Ross’ colleagues in the research included Lionel Kimerling, the Thomas Lord Professor of Materials Science and Engineering, and former students Lei Bi ’11 and Juejun Hu PhD ’09. The work was funded by the National Science Foundation and an Intel fellowship for Bi.

“This is a big advance in optical communications,” says Bethanie Stadler, a professor of electrical and computer engineering at the University of Minnesota, who was not involved in this research. The work is “significant,” she says, “as the first device with garnet integrated onto [silicon] devices.”
