Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - Rohan Sarker

16
EEE / Electronics takes on a new spin
« on: May 09, 2018, 01:14:43 PM »
Exotic materials called topological insulators, discovered just a few years ago, have yielded some of their secrets to a team of MIT researchers. For the first time, the team showed that light can be used to obtain information about the spin of electrons flowing over the material’s surface, and has even found a way to control these electron movements by varying the polarization of a light source.

The materials could open up possibilities for a new kind of device based on spintronics, which makes use of a characteristic of electrons called spin, instead of using their electrical charge the way electronic devices do. They could also allow for much faster control of existing technologies such as magnetic data storage.

Topological insulators are materials that possess paradoxical properties. The three-dimensional bulk of the material behaves just like a conventional insulator (such as quartz or glass), which blocks the movement of electric currents. Yet the material’s outer surface behaves as an extremely good conductor, allowing electricity to flow freely.

The key to understanding the properties of any solid material is to analyze the behavior of electrons within the material — in particular determining what combinations of energy, momentum and spin are possible for these electrons, explains MIT assistant professor of physics Nuh Gedik, senior author of two recent papers describing the new findings. This set of combinations is what determines a material’s key properties — such as whether it is a metal or not, or whether it is transparent or opaque. “It’s very important, but it’s very challenging to measure,” Gedik says.

The traditional way of measuring this is to shine a light on a chunk of the solid material: The light knocks electrons out of the solid, and their energy, momentum and spin can be measured once they are ejected. The challenge, Gedik says, is that such measurements just give you data for one particular point. In order to fill in additional points on this landscape, the traditional approach is to rotate the material slightly, take another reading, then rotate it again, and so on — a very slow process.

Gedik and his team, including graduate students Yihua Wang and James McIver, and MIT Pappalardo postdoctoral fellow David Hsieh, instead devised a method that can provide a detailed three-dimensional mapping of the electron energy, momentum and spin states all at once. They did this by using short, intense pulses of circularly polarized laser light whose time of travel can be precisely measured.

By using this new technique, the MIT researchers were able to image how the spin and motion are related, for electrons travelling in all different directions and with different momenta, all in a fraction of the time it would take using alternative methods, Wang says. This method was described in a paper by Gedik and his team that appeared Nov. 11 in the journal Physical Review Letters.

In addition to demonstrating this novel method and showing its effectiveness, Gedik says, “we learned something that was not expected.” They found that instead of the spin being precisely aligned perpendicular to the direction of the electrons’ motion, when the electrons moved with higher energies there was an unexpected tilt, a sort of warping of the expected alignment. Understanding that distortion “will be important when these materials are used in new technologies,” Gedik says.

The team’s high-speed method of measuring electron motion and spin is not limited to studying topological insulators, but could also have applications for studying materials such as magnets and superconductors, the researchers say.

One unusual characteristic of the way electrons flow across the surface of these materials is that unlike in ordinary metal conductors, impurities in the material have very little effect on the overall electrical conductivity. In most metals, impurities quickly degrade the conductivity and thus hinder the flow of electricity. This relative imperviousness to impurities could make topological insulators an important new material for some electronic applications, though the materials are so new that the most important applications may not yet be foreseen. One possibility is that they could be used for transmission of electrical current in situations where ordinary metals would heat up too much (because of the blocking effect of impurities), damaging the materials.

In a second paper, appearing today in the journal Nature Nanotechnology, Gedik and his team show that a method similar to the one they used to map the electron states can also be used to control the flow of electrons across the surface of these materials. That works because the electrons always spin in a direction nearly perpendicular to their direction of travel, but only electrons spinning in a particular direction are affected by a given circularly polarized laser beam. Thus, that beam can be used to push aside all of the electrons flowing in one direction, leaving a usable electric current flowing the other way.

“This has very immediate device possibilities,” Gedik says, because it allows the flow of current to be controlled completely by a laser beam, with no direct electronic interaction. One possible application would be in a new kind of electromagnetic storage, such as that used in computer hard drives, which now use an electric current to “flip” each storage bit from a 0 to a 1 or vice versa. Being able to control the bits with light could offer a much quicker response time, the team says.

This harnessing of electron behavior could also be a key enabling technology that could lead to the creation of spintronic circuits, using the spin of the electrons to carry information instead of their electric charge. Among other things, such devices could be an important part of creating new quantum computing systems, which many researchers think could have significant advantages over ordinary computers for solving certain kinds of highly complex problems.

Professor of physics Zhi-Xun Shen of Stanford University, who was not involved in this work, says the MIT team has confirmed the theorized structure of the topological surface by using their novel experimental method. In addition to this confirmation, he says, their second paper “is to date one of the most direct experimental evidences for optical coupling” between the laser and the surface currents, and thus “has interesting potential for opto-spintronics.”

17
EEE / 3-D cameras for cellphones
« on: May 09, 2018, 01:13:57 PM »
When Microsoft’s Kinect — a device that lets Xbox users control games with physical gestures — hit the market, computer scientists immediately began hacking it. A black plastic bar about 11 inches wide with an infrared rangefinder and a camera built in, the Kinect produces a visual map of the scene before it, with information about the distance to individual objects. At MIT alone, researchers have used the Kinect to create a “Minority Report”-style computer interface, a navigation system for miniature robotic helicopters and a holographic-video transmitter, among other things.

Now imagine a device that provides more-accurate depth information than the Kinect, has a greater range and works under all lighting conditions — but is so small, cheap and power-efficient that it could be incorporated into a cellphone at very little extra cost. That’s the promise of recent work by Vivek Goyal, the Esther and Harold E. Edgerton Associate Professor of Electrical Engineering, and his group at MIT’s Research Lab of Electronics.

“3-D acquisition has become a really hot topic,” Goyal says. “In consumer electronics, people are very interested in 3-D for immersive communication, but then they’re also interested in 3-D for human-computer interaction.”

Andrea Colaco, a graduate student at MIT’s Media Lab and one of Goyal’s co-authors on a paper that will be presented at the IEEE’s International Conference on Acoustics, Speech, and Signal Processing in March, points out that gestural interfaces make it much easier for multiple people to interact with a computer at once — as in the dance games the Kinect has popularized.

“When you’re talking about a single person and a machine, we’ve sort of optimized the way we do it,” Colaco says. “But when it’s a group, there’s less flexibility.”

Ahmed Kirmani, a graduate student in the Department of Electrical Engineering and Computer Science and another of the paper’s authors, adds, “3-D displays are way ahead in terms of technology as compared to 3-D cameras. You have these very high-resolution 3-D displays that are available that run at real-time frame rates.

“Sensing is always hard,” he says, “and rendering it is easy.”

Clocking in

Like other sophisticated depth-sensing devices, the MIT researchers’ system uses the “time of flight” of light particles to gauge depth: A pulse of infrared laser light is fired at a scene, and the camera measures the time it takes the light to return from objects at different distances.
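
As a rough illustration (not the researchers' code), the time-of-flight relationship itself is simple: the distance to an object is half the round-trip travel time multiplied by the speed of light.

```python
# Minimal sketch of the time-of-flight relationship, for illustration only.
C = 299_792_458.0  # speed of light, in meters per second

def depth_from_round_trip(t_seconds):
    """Distance in meters for a pulse that returns after t_seconds."""
    return C * t_seconds / 2.0  # divide by two: the light travels out and back

print(depth_from_round_trip(10e-9))  # a 10-nanosecond echo is roughly 1.5 meters away
```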

Traditional time-of-flight systems use one of two approaches to build up a “depth map” of a scene. LIDAR (for light detection and ranging) uses a scanning laser beam that fires a series of pulses, each corresponding to a point in a grid, and separately measures their time of return. But that makes data acquisition slower, and it requires a mechanical system to continually redirect the laser. The alternative, employed by so-called time-of-flight cameras, is to illuminate the whole scene with laser pulses and use a bank of sensors to register the returned light. But sensors able to distinguish small groups of light particles — photons — are expensive: A typical time-of-flight camera costs thousands of dollars.

The MIT researchers’ system, by contrast, uses only a single light detector — a one-pixel camera. But by using some clever mathematical tricks, it can get away with firing the laser a limited number of times.

The first trick is a common one in the field of compressed sensing: The light emitted by the laser passes through a series of randomly generated patterns of light and dark squares, like irregular checkerboards. Remarkably, this provides enough information that algorithms can reconstruct a two-dimensional visual image from the light intensities measured by a single pixel.

In experiments, the researchers found that the number of laser flashes — and, roughly, the number of checkerboard patterns — that they needed to build an adequate depth map was about 5 percent of the number of pixels in the final image. A LIDAR system, by contrast, would need to send out a separate laser pulse for every pixel.
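
The single-pixel idea can be sketched numerically. The toy example below is a hedged illustration, not the group's algorithm: the image size, the number of masks and the iterative soft-thresholding recovery are all arbitrary choices. It treats each measurement as the total light passing through one random checkerboard pattern and recovers a small, sparse scene from far fewer measurements than pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: 64 "pixels", only a few of them bright (i.e., a sparse image).
n = 64
x_true = np.zeros(n)
x_true[rng.choice(n, size=4, replace=False)] = 1.0

# Random light/dark masks; each single-pixel measurement is the masked total.
m = 24  # far fewer measurements than pixels
A = rng.integers(0, 2, size=(m, n)).astype(float)
y = A @ x_true

# Crude sparse recovery by iterative soft-thresholding, standing in for the
# more sophisticated reconstruction used in practice; the result is approximate.
x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 0.05
for _ in range(2000):
    x = x + step * A.T @ (y - A @ x)                          # gradient step on the data fit
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # promote sparsity

print("brightest recovered pixels:", np.sort(np.argsort(x)[-4:]))
print("true bright pixels:        ", np.sort(np.nonzero(x_true)[0]))
```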

To add the crucial third dimension to the depth map, the researchers use another technique, called parametric signal processing. Essentially, they assume that all of the surfaces in the scene, however they’re oriented toward the camera, are flat planes. Although that’s not strictly true, the mathematics of light bouncing off flat planes is much simpler than that of light bouncing off curved surfaces. The researchers’ parametric algorithm matches the information about the returning light against the flat-plane model that best fits it, creating a very accurate depth map from a minimum of visual information.
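
The flat-plane assumption itself is easy to picture with a toy fit. The sketch below is only an illustration of the geometric model, not the parametric algorithm, which works directly on the timing of the returning light: a least-squares fit of z = ax + by + c to noisy depth samples recovers the plane's parameters from very little data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic depth samples from a tilted flat surface, plus a little noise.
xy = rng.uniform(0.0, 1.0, size=(200, 2))
a_true, b_true, c_true = 0.3, -0.2, 2.0
z = a_true * xy[:, 0] + b_true * xy[:, 1] + c_true + 0.01 * rng.normal(size=200)

# Least-squares fit of the flat-plane model z = a*x + b*y + c.
A = np.column_stack([xy, np.ones(len(xy))])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
print(a, b, c)  # close to 0.3, -0.2, 2.0
```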

On the cheap

Indeed, the algorithm lets the researchers get away with relatively crude hardware. Their system measures the time of flight of photons using a cheap photodetector and an ordinary analog-to-digital converter — an off-the-shelf component already found in all cellphones. The sensor takes about 0.7 nanoseconds to register a change to its input.

That’s enough time for light to travel 21 centimeters, Goyal says. “So for an interval of depth of 10 and a half centimeters — I’m dividing by two because light has to go back and forth — all the information is getting blurred together,” he says. Because of the parametric algorithm, however, the researchers’ system can distinguish objects that are only two millimeters apart in depth. “It doesn’t look like you could possibly get so much information out of this signal when it’s blurred together,” Goyal says.
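
The numbers in that quote follow directly from the speed of light; here is the quick arithmetic check, for illustration:

```python
C = 299_792_458.0              # speed of light, meters per second
t = 0.7e-9                     # the detector's 0.7-nanosecond response time
round_trip = C * t             # about 0.21 meters of light travel per timing step
depth_step = round_trip / 2.0  # about 0.105 meters of depth blurred together
print(round_trip, depth_step)
```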

The researchers’ algorithm is also simple enough to run on the type of processor ordinarily found in a smartphone. To interpret the data provided by the Kinect, by contrast, the Xbox requires the extra processing power of a graphics-processing unit, or GPU, a powerful special-purpose piece of hardware.

“This is a brand-new way of acquiring depth information,” says Yue M. Lu, an assistant professor of electrical engineering at Harvard University. “It’s a very clever way of getting this information.” One obstacle to deployment of the system in a handheld device, Lu speculates, could be the difficulty of emitting light pulses of adequate intensity without draining the battery.

But the light intensity required to get accurate depth readings is proportional to the distance of the objects in the scene, Goyal explains, and the applications most likely to be useful on a portable device — such as gestural interfaces — deal with nearby objects. Moreover, he explains, the researchers’ system makes an initial estimate of objects’ distance and adjusts the intensity of subsequent light pulses accordingly.

18
EEE / The faster-than-fast Fourier transform
« on: May 09, 2018, 01:13:10 PM »
The Fourier transform is one of the most fundamental concepts in the information sciences. It’s a method for representing an irregular signal — such as the voltage fluctuations in the wire that connects an MP3 player to a loudspeaker — as a combination of pure frequencies. It’s universal in signal processing, but it can also be used to compress image and audio files, solve differential equations and price stock options, among other things.

The reason the Fourier transform is so prevalent is an algorithm called the fast Fourier transform (FFT), devised in the mid-1960s, which made it practical to calculate Fourier transforms on the fly. Ever since the FFT was proposed, however, people have wondered whether an even faster algorithm could be found.


At the Symposium on Discrete Algorithms (SODA) this week, a group of MIT researchers will present a new algorithm that, in a large range of practically important cases, improves on the fast Fourier transform. Under some circumstances, the improvement can be dramatic — a tenfold increase in speed. The new algorithm could be particularly useful for image compression, enabling, say, smartphones to wirelessly transmit large video files without draining their batteries or consuming their monthly bandwidth allotments.

Like the FFT, the new algorithm works on digital signals. A digital signal is just a series of numbers — discrete samples of an analog signal, such as the sound of a musical instrument. The FFT takes a digital signal containing a certain number of samples and expresses it as the weighted sum of an equivalent number of frequencies.

“Weighted” means that some of those frequencies count more toward the total than others. Indeed, many of the frequencies may have such low weights that they can be safely disregarded. That’s why the Fourier transform is useful for compression. An eight-by-eight block of pixels can be thought of as a 64-sample signal, and thus as the sum of 64 different frequencies. But as the researchers point out in their new paper, empirical studies show that on average, 57 of those frequencies can be discarded with minimal loss of image quality.
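
The compression idea can be illustrated with a generic transform-and-discard sketch. The example below uses a plain 2-D FFT on a made-up smooth block, rather than the discrete cosine transform that image formats such as JPEG actually use: keep only the handful of largest frequency coefficients and invert.

```python
import numpy as np

# A smooth 8x8 block of "pixels"; smooth blocks concentrate their energy
# in very few frequencies, which is what makes them compressible.
x, y = np.meshgrid(np.arange(8), np.arange(8))
block = 100 + 10 * np.cos(2 * np.pi * x / 8) + 5 * np.cos(2 * np.pi * y / 8)

# Transform, keep only the 7 largest-magnitude coefficients, discard the other 57.
F = np.fft.fft2(block)
keep = 7
idx = np.argsort(np.abs(F).ravel())[-keep:]
mask = np.zeros(F.size, dtype=bool)
mask[idx] = True
F_sparse = np.where(mask.reshape(F.shape), F, 0)

# Invert: for this particular block the error is negligible, because it happens
# to be exactly representable by just five frequency coefficients.
approx = np.real(np.fft.ifft2(F_sparse))
print("max abs error:", np.max(np.abs(block - approx)))
```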

Heavyweight division

Signals whose Fourier transforms include a relatively small number of heavily weighted frequencies are called “sparse.” The new algorithm determines the weights of a signal’s most heavily weighted frequencies; the sparser the signal, the greater the speedup the algorithm provides. Indeed, if the signal is sparse enough, the algorithm can simply sample it randomly rather than reading it in its entirety.

“In nature, most of the normal signals are sparse,” says Dina Katabi, one of the developers of the new algorithm. Consider, for instance, a recording of a piece of chamber music: The composite signal consists of only a few instruments each playing only one note at a time. A recording, on the other hand, of all possible instruments each playing all possible notes at once wouldn’t be sparse — but neither would it be a signal that anyone cares about.

The new algorithm — which associate professor Katabi and professor Piotr Indyk, both of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), developed together with their students Eric Price and Haitham Hassanieh — relies on two key ideas. The first is to divide a signal into narrower slices of bandwidth, sized so that a slice will generally contain only one frequency with a heavy weight.

In signal processing, the basic tool for isolating particular frequencies is a filter. But filters tend to have blurry boundaries: One range of frequencies will pass through the filter more or less intact; frequencies just outside that range will be somewhat attenuated; frequencies farther outside it will be attenuated still more; and so on, until you reach the frequencies that are filtered out almost perfectly.

If it so happens that the one frequency with a heavy weight is at the edge of the filter, however, it could end up so attenuated that it can’t be identified. So the researchers’ first contribution was to find a computationally efficient way to combine filters so that they overlap, ensuring that no frequencies inside the target range will be unduly attenuated, but that the boundaries between slices of spectrum are still fairly sharp.

Zeroing in

Once they’ve isolated a slice of spectrum, however, the researchers still have to identify the most heavily weighted frequency in that slice. In the SODA paper, they do this by repeatedly cutting the slice of spectrum into smaller pieces and keeping only those in which most of the signal power is concentrated. But in an as-yet-unpublished paper, they describe a much more efficient technique, which borrows a signal-processing strategy from 4G cellular networks. Frequencies are generally represented as up-and-down squiggles, but they can also be thought of as oscillations; by sampling the same slice of bandwidth at different times, the researchers can determine where the dominant frequency is in its oscillatory cycle.

Two University of Michigan researchers — Anna Gilbert, a professor of mathematics, and Martin Strauss, an associate professor of mathematics and of electrical engineering and computer science — had previously proposed an algorithm that improved on the FFT for very sparse signals. “Some of the previous work, including my own with Anna Gilbert and so on, would improve upon the fast Fourier transform algorithm, but only if the sparsity k” — the number of heavily weighted frequencies — “was considerably smaller than the input size n,” Strauss says. The MIT researchers’ algorithm, however, “greatly expands the number of circumstances where one can beat the traditional FFT,” Strauss says. “Even if that number k is starting to get close to n — to all of them being important — this algorithm still gives some improvement over FFT.”

19
EEE / Minimizing background noise in stem cell culture
« on: May 09, 2018, 01:12:06 PM »
Cells grown in culture are not alone: They are constantly communicating with one another by sending signals through their culture media that are picked up and transmitted by other cells in the media. When thousands of cells are cultured together in a dish, there are hundreds of thousands of these signals present every minute, all competing to be heard.

Scientists trying to direct cells to do useful things — like causing stem cells to turn into neurons or heart cells — typically try to overcome these signals by adding their own exogenous factors. These exogenous factors are often added at saturating concentrations, blanketing the cells with a particular growth factor or cytokine to activate specific pathways to produce a desired outcome, such as controlling stem cell differentiation. However, the constant din of cell communications is still present, causing alternate and perhaps opposing pathways to be stimulated.

This unstoppable secretion by cells in culture makes it difficult to determine the exact “recipe” of exogenous factors needed to elicit a specific phenotype, particularly in fast-growing cells like embryonic stem cells. MIT researchers Laralynne Przybyla, a graduate student in biology, and Joel Voldman, associate professor of electrical engineering and computer science, report in a paper published this week in Proceedings of the National Academy of Sciences how they were able to silence this din by using a microfluidic device to culture embryonic stem cells under continuous liquid flow (known as perfusion) such that factors secreted by the cells were removed before they could be transmitted to other cells. They used this device to investigate the influence of these factors on stem cells.

Mouse embryonic stem cells (mESCs) can be maintained as stem cells indefinitely in culture, a characteristic known as self-renewal. One recipe used by scientists to maintain mESC self-renewal involves addition of the signaling proteins known as leukemia inhibitory factor (LIF) and bone morphogenetic protein 4 (BMP4). It has been previously thought that LIF and BMP4 together were sufficient to maintain mESC self-renewal, an assumption that was difficult to challenge given the problem of cell-secreted background noise.

However, using their microfluidic perfusion device, Przybyla and Voldman found that mESCs actually exit their self-renewing state when the culture liquid is perfused in the presence of LIF and BMP4. “This shows that these two factors are not in fact sufficient in isolation, and that a previously unrecognized cell-secreted signal is required to maintain embryonic stem cells as such,” notes Przybyla, who was first author on the study.

She and Voldman additionally found that the cells entered a more primed stem cell state corresponding to a more advanced embryonic developmental stage, and that proteins that normally remodel the fibrous scaffolding that cells secrete, known as the extracellular matrix, were partially responsible for the change.

“By showing that microfluidic perfusion can be used to study not only the necessity but also the sufficiency of specific factors to elicit a desired phenotype, this work has the potential for broader applications such as determination of cancer cell-secreted signals that are required for tumor growth or metastasis,” Voldman says. The results also directly inform stem cell biology, because understanding self-renewal is essential as stem cell research advances toward clinical and biotechnological applications.

20
EEE / The blind codemaker
« on: May 09, 2018, 01:11:30 PM »
Error-correcting codes are one of the triumphs of the digital age. They’re a way of encoding information so that it can be transmitted across a communication channel — such as an optical fiber or a wireless connection — with perfect fidelity, even in the presence of the corrupting influences known as “noise.”

An encoded message is called a codeword; the noisier the channel, the longer the codeword has to be to ensure perfect communication. But the longer the codeword, the longer it takes to transmit the message. So the ideal of maximally efficient, perfectly faithful communication requires precisely matching codeword length to the level of noise in the channel.

Wireless devices, such as cellphones or Wi-Fi transmitters, regularly send out test messages to gauge noise levels, so they can adjust their codes accordingly. But as anyone who’s used a cellphone knows, reception quality can vary at locations just a few feet apart — or even at a single location. Noise measurements can rapidly become outdated, and wireless devices routinely end up using codewords that are too long, squandering bandwidth, or too short, making accurate decoding impossible.

In the next issue of the journal IEEE Transactions on Information Theory, Gregory Wornell, a professor in the Department of Electrical Engineering and Computer Science at MIT, Uri Erez at Tel Aviv University in Israel and Mitchell Trott at Google describe a new coding scheme that guarantees the fastest possible delivery of data over fluctuating wireless connections without requiring prior knowledge of noise levels. The researchers also received a U.S. patent for the technique in September.

Say ‘when’

The scheme works by creating one long codeword for each message, but successively longer chunks of the codeword are themselves good codewords. “The transmission strategy is that we send the first part of the codeword,” Wornell explains. “If it doesn’t succeed, we send the second part, and so on. We don’t repeat transmissions: We always send the next part rather than resending the same part again. Because when you marry the first part, which was too noisy to decode, with the second and any subsequent parts, they together constitute a new, good encoding of the message for a higher level of noise.”

Say, for instance, that the long codeword — call it the master codeword — consists of 30,000 symbols. The first 10,000 symbols might be the ideal encoding if there’s a minimum level of noise in the channel. But if there’s more noise, the receiver might need the next 5,000 symbols as well, or the next 7,374. If there’s a lot of noise, the receiver might require almost all of the 30,000 symbols. But once it has received enough symbols to decode the underlying message, it signals the sender to stop. In the paper, the researchers prove mathematically that at that point, the length of the received codeword is the shortest possible length given the channel’s noise properties — even if they’ve been fluctuating.
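
In pseudocode form, the transmission loop described above looks something like the sketch below. This is only a schematic: `channel` and `receiver` are hypothetical stand-ins for the noisy link and the decoder, not objects defined in the paper.

```python
def transmit(master_codeword_chunks, channel, receiver):
    """Send successive chunks of one master codeword until the receiver decodes.

    Sketch only: `channel.send` and `receiver.try_decode` are hypothetical
    placeholders for the physical link and the decoding algorithm.
    """
    received = []
    for chunk in master_codeword_chunks:
        received.append(channel.send(chunk))     # never re-send; always send the next part
        message = receiver.try_decode(received)  # try to decode with everything so far
        if message is not None:                  # receiver signals the sender to stop
            return message
    return None  # the channel was noisier than the longest codeword allows for
```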

To produce their master codeword, the researchers first split the message to be sent into several — for example, three — fragments of equal length. They encode each of those fragments using existing error-correcting codes, such as Gallager codes, a very efficient class of codes common in wireless communication. Then they multiply each of the resulting codewords by a different number and add the results together. That produces the first chunk of the master codeword. Then they multiply the codewords by a different set of numbers and add those results, producing the second chunk of the master codeword, and so on.
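
A toy numerical sketch of that construction follows, with ordinary real-valued arithmetic and random coefficients standing in for the multipliers; the paper specifies the actual codes and coefficient choices, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def encode_fragment(bits):
    # Placeholder for a real error-correcting code such as a Gallager (LDPC)
    # code; here each bit is simply mapped to a +/-1 symbol for illustration.
    return 2.0 * np.asarray(bits, dtype=float) - 1.0

# Split the message into three equal fragments and encode each one separately.
message = rng.integers(0, 2, size=30)
fragments = np.split(message, 3)
codewords = np.stack([encode_fragment(f) for f in fragments])  # shape (3, 10)

# Each chunk of the master codeword is a different weighted combination of the
# three fragment codewords: multiply by one set of numbers, add them together,
# then repeat with a new set of numbers for the next chunk.
num_chunks = 3
coeffs = rng.normal(size=(num_chunks, 3))  # one coefficient set per chunk
master_chunks = coeffs @ codewords         # shape (num_chunks, 10)
print(master_chunks.shape)
```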

Tailor-made

In order to decode a message, the receiver needs to know the numbers by which the codewords were multiplied. Those numbers — along with the number of fragments into which the initial message is divided and the size of the chunks of the master codeword — depend on the expected variability of the communications channel. Wornell surmises, however, that a few standard configurations will suffice for most wireless applications.

The only chunk of the master codeword that must be transmitted in its entirety is the first. Thereafter, the receiver could complete the decoding with only partial chunks. So the size of the initial chunk is calibrated to the highest possible channel quality that can be expected for a particular application.

Finally, the complexity of the decoding process depends on the number of fragments into which the initial message is divided. If that number is three, which Wornell considers a good bet for most wireless links, the decoder has to decode three messages instead of one for every chunk it receives, so it will perform three times as many computations as it would with a conventional code. “In the world of digital communication, however,” Wornell says, “a fixed factor of three is not a big deal, given Moore’s Law on the growth of computation power.”

H. Vincent Poor, the Michael Henry Strater University Professor of Electrical Engineering and dean of the School of Engineering and Applied Science at Princeton University, sees few obstacles to the commercial deployment of a coding scheme such as the one developed by Wornell and his colleagues. “The codes are inherently practical,” Poor says. “In fact, the paper not only develops the theory and analysis of such codes but also provides specific examples of practical constructions.”

Because the codes “enable efficient communication over unpredictable channels,” he adds, “they have an important role to play in future wireless-communication applications and standards for connecting mobile devices.”

21
EEE / Microchips’ optical future
« on: May 09, 2018, 01:10:48 PM »
As the United States seeks to reinvigorate its job market and move past economic recession, MIT News examines manufacturing’s role in the country's economic future through this series on work at the Institute around manufacturing.

Computer chips are one area where the United States still enjoys a significant manufacturing lead over the rest of the world. In 2011, five of the top 10 chipmakers by revenue were U.S. companies, and Intel, the largest of them by a wide margin, has seven manufacturing facilities in the United States, versus only three overseas.

The most recent of those to open, however, is in China, and while that may have been a strategic rather than economic decision — an attempt to gain leverage in the Chinese computer market — both the Chinese and Indian governments have invested heavily in their countries’ chip-making capacities. In order to maintain its manufacturing edge, the United States will need to continue developing new technologies at a torrid pace. And one of those new technologies will almost certainly be an integrated optoelectronic chip — a chip that uses light rather than electricity to move data.

As chips’ computational power increases, they need higher-bandwidth connections — whether between servers in a server farm, between a chip and main memory, or between the individual cores on a single chip. But with electrical connections, increasing bandwidth means increasing power. A 2006 study by Japan's Ministry of Economy, Trade and Industry predicted that by 2025, information technology in Japan alone would consume nearly 250 billion kilowatt-hours' worth of electricity per year, or roughly what the entire country of Australia consumes today.

Optoelectronic chips could drastically reduce future computers’ power consumption. But to produce the optoelectronic chips used today in telecommunications networks, chipmakers manufacture optical devices — such as lasers, photodetectors and modulators — separately and then attach them to silicon chips. That approach wouldn’t work with conventional microprocessors, which require a much denser concentration of higher-performance components.

The most intuitive way to add optics to a microprocessor’s electronics would be to build both directly on the same piece of silicon, a technique known as monolithic integration.

In a 2010 paper in the journal Management Science, Erica Fuchs, an assistant professor of engineering and public policy at Carnegie Mellon University, who got her PhD in 2006 from MIT’s Engineering Systems Division, and MIT’s Randolph Kirchain, a principal research scientist at the Materials Systems Laboratory, found that monolithically integrated chips were actually cheaper to produce in the United States than in low-wage countries.

“The designers and the engineers with the capabilities to produce those technologies didn’t want to move to developing East Asia,” Fuchs says. “Those engineers are in the U.S., and that’s where you would need to manufacture.”

During the telecom boom of the late 1990s, Fuchs says, telecommunications companies investigated the possibility of producing monolithically integrated communications chips. But when the bubble burst, they fell back on the less technically demanding process of piecemeal assembly, which was practical overseas. That yielded chips that were cheaper but also much larger.

While large chips are fine in telecommunications systems, they’re not an option in laptops or cellphones. The materials used in today’s optical devices, however, are incompatible with the processes currently used to produce microprocessors, making monolithic integration a stiff challenge.

Making the case

According to Vladimir Stojanovic, an associate professor of electrical engineering, microprocessor manufacturers are all the more reluctant to pursue monolithic integration because they’ve pushed up against the physical limits of the transistor design that has remained more or less consistent for more than 50 years. “It never was the case that from one generation to another, you’d be completely redesigning the device,” Stojanovic says. U.S. chip manufacturers are so concerned with keeping up with Moore’s Law — the doubling of the number of transistors on a chip roughly every 18 months — that integrating optics is on the back burner. “You’re trying to push really hard on the transistor,” Stojanovic says, “and then somebody else is telling you, ‘Oh, but you need to worry about all these extra constraints on photonics if you integrate in the same front end.’”

To try to get U.S. chip manufacturers to pay more attention to optics, Stojanovic and professor of electrical engineering Rajeev Ram have been leading an effort to develop techniques for monolithically integrating optical components into computer chips without disrupting existing manufacturing processes. They’ve gotten very close: Using IBM’s chip-fabrication facilities, they’ve produced chips with photodetectors, ring resonators (which filter out particular wavelengths of light) and waveguides (which conduct light across the chip), all of which are controlled by on-chip circuitry. The one production step that can’t be performed in the fabrication facility is etching a channel under the waveguides, to prevent light from leaking out of them.

But Stojanovic acknowledges that optimizing the performance of these optical components would probably require some modification to existing processes. In that respect, the uncertainty of the future of transistor design may actually offer an opportunity. It could be easier to add optical components to a chip being designed from the ground up than to one whose design is fixed. “That’s the moment it has to come in,” Stojanovic says, “at the moment where everything’s in flux, and soft.”

Loyal opposition

Another of Stojanovic and Ram’s collaborators on the monolithic-integration project is Michael Watts, who received his PhD from MIT in 2005 and returned in 2010 as an associate professor of electrical engineering after a stint at Sandia National Labs. Stojanovic and Watts are also collaborating with researchers at MIT’s Lincoln Laboratory on a different project — with Watts as the principal investigator — in which optical and electrical components are built on different wafers of silicon, which are then fused together to produce a hybrid wafer.

This approach falls somewhere between monolithic integration and the piecemeal-assembly technique used today. Because it involves several additional processing steps, it could prove more expensive than fully realized monolithic integration — but in the near term, it could also prove more practical, because it allows the performance of the optics and electronics to be optimized separately. As for why the researchers would collaborate on two projects that in some sense compete with each other, Watts says, “Sometimes the best policy is: When you come to a fork in the road, take it.”

And indeed, Ram thinks that the two approaches may not compete with each other at all. Even if optical and electrical components were built on separate chips and fused together, Ram explains, the optical chip would still require some electronics. “You will likely have the electronics that do the detection on the same chip as the photodetector,” Ram says. “Same for the electronics that drive the modulator. Either way, you have to figure out how to integrate photonics in the plane with the transistor.”

Getting on board

With both approaches, however — monolithic integration or chip stacking — the laser that provides the data-carrying beam of light would be off-chip. Off-chip lasers may well be a feature of the first optoelectronic chips to appear in computers, which might be used chiefly to relay data between servers, or between a processor and memory. But chips that use optics to communicate between cores will probably require on-chip lasers.

The lasers used in telecommunications networks, however, are made from exotic semiconductors that make them even more difficult to integrate into existing manufacturing processes than photodetectors or waveguides. In 2010, Lionel Kimerling, the Thomas Lord Professor of Materials Science and Engineering, and his group demonstrated the first laser built from germanium that can produce wavelengths of light useful for optical communication.

Since many chip manufacturers already use germanium to increase their chips’ speed, this could make on-chip lasers much easier to build. And that, in turn, would reduce chips’ power consumption even further. “The on-chip laser lets you do all the energy-efficiency tricks that you can do with electronics,” Kimerling says. “You can turn it off when you’re not using it. You can reduce its power when it doesn’t have to go that far.”

But the real advantage of on-chip lasers, Kimerling says, would be realized in microprocessors with hundreds of cores, such as the one currently being designed in a major project led by Anant Agarwal, director of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). “Right now, we have a vision that you need one laser for one core, and it can communicate with all the other cores,” Kimerling says. “That allows you to trade data among the cores without going to [memory], and that’s a major energy saving.”

22
MIT OpenCourseWare has released a new version of Introduction to Electrical Engineering and Computer Science I in the innovative OCW Scholar format designed for independent learners. Coordinated by Professor Dennis Freeman, 6.01SC includes contributions from a half dozen MIT electrical engineering and computer science (EECS) faculty members, and features lecture and recitation videos.

6.01 is one of two introductory courses required of all EECS majors. It was developed as part of a new EECS curriculum introduced in 2005, and is designed to introduce students to electrical engineering and computer science through both theory and practice. 6.01 also provides prerequisite information supporting study in the new 6.002x course being offered this spring through MITx, the online-learning initiative MIT announced in December 2011.

OCW Scholar courses represent a new approach to OCW publication. MIT faculty, staff and students work closely with the OCW team to structure the course materials for independent learners. These courses offer more materials than typical OCW courses and include new custom-created content. 6.01SC provides a complete learning experience for independent learners, including lecture videos, recitation videos, course notes, software and design labs, homework assignments and additional exercises, and quizzes and exams.

The first five of a planned 15 OCW Scholar courses were launched by MIT OpenCourseWare in January 2011, and have collectively received more than 800,000 visits in less than a year. The initial OCW Scholar courses included Classical Mechanics, Electricity and Magnetism, Solid State Chemistry, Single Variable Calculus, and Multivariable Calculus.

Linear Algebra, Differential Equations, and Principles of Microeconomics were published earlier this year, and Introduction to Electrical Engineering and Computer Science is the fourth of seven OCW Scholar courses that will be published in 2012. Upcoming OCW Scholar courses are Introduction to Psychology, Fundamentals of Biology and Introduction to Computer Science and Programming. OCW Scholar courses are published on the OCW site with the support of the Stanton Foundation.

23
EEE / Measuring blood flow to monitor sickle cell disease
« on: May 09, 2018, 01:09:18 PM »
More than 60 years ago, scientists discovered the underlying cause of sickle cell disease: People with the disorder produce crescent-shaped red blood cells that clog capillaries instead of flowing smoothly, like ordinary, disc-shaped red blood cells do. This can cause severe pain, major organ damage and a significantly shortened lifespan.

Researchers later found that the disease results from a single mutation in the hemoglobin protein, and realized that the sickle shape — seen more often in people from tropical climates — is actually an evolutionary adaptation that can help protect against malaria.

However, despite everything scientists have learned about the disease, which affects 13 million people worldwide, there are few treatments available. “We still don’t have effective enough therapies and we don’t have a good feel for how the disease manifests itself differently in different people,” says Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science at MIT.

Bhatia, MIT postdoc David Wood, and colleagues at Harvard University, Massachusetts General Hospital (MGH) and Brigham and Women’s Hospital have now devised a simple blood test that can predict whether sickle cell patients are at high risk for painful complications of the disease. To perform the test, the researchers measure how well blood samples flow through a microfluidic device.

The device, described March 1 in the journal Science Translational Medicine, could help doctors monitor sickle cell patients and determine the best course of treatment, Bhatia says. It could also aid researchers in developing new drugs for the disease.

Monitoring blood flow

Sickle cell patients often suffer from anemia because their abnormal red blood cells don’t last very long in circulation. However, most of the symptoms associated with the disease are caused by vaso-occlusive crises that occur when the sickle-shaped cells, which are stiffer and stickier than normal blood cells, clog blood vessels and block blood flow. The frequency and severity of these crises vary widely between patients, and there is no way to predict when they will occur.

“When a patient has high cholesterol, you can monitor their risk for heart disease and response to therapy with a blood test. With sickle cell disease, despite patients having the same underlying genetic change, some suffer tremendously while others don’t — and we still don’t have a test that can guide physicians in making therapeutic decisions,” Bhatia says.

In 2007, Bhatia and L. Mahadevan, a Harvard professor of applied mathematics who studies natural and biological phenomena, started working together to understand how sickle cells move through capillaries. In the current study, the researchers recreated the conditions that can produce a vaso-occlusive crisis: They directed blood through a microchannel and lowered its oxygen concentration, which triggers sickle cells to jam and block blood flow.

For each blood sample, they measured how quickly it would stop flowing after being deoxygenated. John Higgins of MGH and Harvard Medical School, an author of the paper, compared blood samples taken from sickle cell patients who had or had not made an emergency trip to the hospital or received a blood transfusion within the previous 12 months, and found that blood from patients with a less severe form of the disease did not slow down as quickly as that of more severely affected patients.

No other existing measures of blood properties — including concentration of red blood cells, fraction of altered hemoglobin or white blood cell count — can make this kind of prediction, Bhatia says. The finding highlights the importance of looking at vaso-occlusion as the result of the interaction of many factors, rather than a single molecular measurement, she says.

To show that this device could be useful for drug development, the researchers also tested a potential sickle cell disease drug called 5-hydroxymethyl furfural, which improves hemoglobin’s ability to bind to oxygen. Adding the drug to blood, they found, dramatically improved how it flowed through the device.

Franklin Bunn, director of hematology research at Brigham and Women’s Hospital, who was not part of this study, says the device could prove very helpful for drug development. “It provides an objective way of assessing new drugs that hopefully will continue to be developed to inhibit the sickling of red blood cells,” Bunn says.

The researchers have applied for a patent on the technology and are now working on developing it as a diagnostic and research tool.

24
EEE / Transistors promise more powerful logic, more logical power
« on: May 09, 2018, 01:08:34 PM »
As the United States seeks to reinvigorate its job market and move past economic recession, MIT News examines manufacturing’s role in the country’s economic future through this series on work at the Institute around manufacturing.

Broadly speaking, the two major areas of research at MIT’s Microsystems Technology Laboratory (MTL) are electronics — transistors in particular — and microelectromechanical systems, or MEMS — tiny mechanical devices with moving parts. Both strains of research could have significant implications for manufacturing in the United States, but at least for the moment, the market for transistor innovation is far larger.

MTL’s Judy Hoyt is proof of the influence that academic research can have on that market. In the 1990s, she helped pioneer the use of “strained silicon” — silicon whose atoms have been pried apart slightly more than normal — to improve the performance of microchips. Intel, most of whose chips are produced in the United States, was the first chipmaker to introduce strained silicon, in 2003. But by now, Hoyt says, the technology has percolated throughout the industry.

Hoyt, a professor of electrical engineering at MIT, argues that U.S. chipmakers have a commercial incentive to keep manufacturing at home. Innovative ideas that afford a competitive advantage often emerge from the very process of building and operating a fabrication facility, Hoyt says. “In the course of doing the manufacturing, additional know-how gets generated,” she says. “If you do design here and do the actual fabrication and manufacturing elsewhere, it’ll work for a while, but in the long run, you can lose early access to the most advanced technologies.” Indeed, Intel — whose domination of the microprocessor market depends in large part on always being a few steps ahead of its competition technologically — recently began building a massive new production facility in Arizona.

Hoyt continues to research chip technologies that are an even more radical departure from the norm than strained silicon was in the 1990s — and will thus require “additional know-how” to produce, she says. One approach is building transistors from strained nanowires of silicon or germanium. In particular, Hoyt says, her group is investigating techniques for fabricating the nanowires that would make mass production viable. She’s also collaborating with several other MTL researchers on what she describes as “new concepts about how a digital switch might operate” — concepts that, for example, involve quantum tunneling, a counterintuitive physical phenomenon in which a subatomic particle seems to magically pass through a physical barrier.

Compound interest

MTL’s Jesus del Alamo, on the other hand, researches ways to keep the existing transistor design going as long as possible. That design hasn’t changed much in the last 50 years, but chipmakers are finally bumping up against the fundamental physical limits of silicon. Del Alamo studies compound semiconductors — so called because they combine multiple elements, such as gallium, indium and arsenic — whose electrical properties offer advantages over silicon. If chipmakers can successfully introduce these exotic materials into their fabrication processes, they might be able to continue to improve chip performance without abandoning existing chip designs.

That’s a big “if,” however, as the chemicals and conditions required to create compound-semiconductor circuitry are incompatible with the processes currently used to produce silicon chips. The first chipmaker to reconcile the two could enjoy a pronounced competitive advantage.

“It was really Intel’s leadership about six, seven years ago that focused attention on the difficulties of traditional silicon scaling,” says del Alamo, the Donner Professor of Electrical Engineering at MIT. “For years, it was only Intel, ourselves and slowly, a small trickle of other universities that started looking at this. To the extent that U.S. industry in the form of, essentially, Intel is three, four years ahead of the rest of the world, then we might have an edge in this technology that might have implications for manufacturing.”

Today, compound semiconductors are used in high-speed electronics, such as the devices that process data in fiber-optic networks. But while compound-semiconductor transistors offer substantial performance gains over silicon transistors, they’re also substantially larger. For years, del Alamo’s group has been working to establish whether compound semiconductors’ advantages will persist even at the small scales required for digital-logic applications. Recently, del Alamo says, using MTL’s own fabrication facilities, his group has produced a compound-semiconductor chip with transistors whose critical dimensions are roughly the same as those of transistors found in commercial microprocessors. The transistors’ performance wasn’t up to the standard that commercial applications would demand, but, del Alamo says, “we think we know why, and we believe we can address the problems. So the technology looks very promising.”

Powering down

Sharing an office suite with del Alamo is Tomás Palacios, an MTL researcher best known for his work with graphene, a material that consists of a single layer of carbon atoms and has remarkable electronic and mechanical properties.

But Palacios’ group is also researching another compound semiconductor, gallium nitride, whose influence on manufacturing could extend well beyond the semiconductor industry. Gallium nitride is commonly used in light-emitting diodes, but Palacios is investigating its application in power electronics — the devices in both the electrical grid and household electronics that change the voltage of electrical power or switch back and forth between alternating and direct current.

“Intrinsically, in the U.S., we are going to have higher wages than in many other places in the world, so if we want to compete with those places, we need to be more efficient,” says Palacios, the Emanuel E. Landsman (1958) Career Development Associate Professor of Electronics at MIT. Approximately 30 percent of the electricity produced in the United States, Palacios explains, goes to manufacturing.

“Most of that electricity is used in powering motors and engines of different kinds,” he says. “Those motors are actually very inefficient. They are being controlled by old power electronics.” Gallium nitride power electronics, Palacios says, could cut the electricity consumed by U.S. manufacturing by more than 30 percent. “For a given voltage, gallium nitride has three orders of magnitude less resistance than conventional silicon-based electronics, so it’s much more efficient,” Palacios says.

The problem is that gallium nitride is such a good conductor that transistors made from it are hard to shut off. Palacios’ group has developed — and filed a couple of patents on — a new transistor design that should address that problem. They continue to refine the design, in the hopes of making gallium nitride power electronics both more efficient and easier to mass produce.

25
EEE / Testing unbuilt chips
« on: May 09, 2018, 01:07:35 PM »
For the last decade or so, computer chip manufacturers have been increasing the speed of their chips by giving them extra processing units, or “cores.” Most major manufacturers now offer chips with eight, 10 or even 12 cores.

But if chips are to continue improving at the rate we’ve grown accustomed to — doubling in power roughly every 18 months — they’ll soon require hundreds and even thousands of cores. Academic and industry researchers are full of ideas for improving the performance of multicore chips, but there’s always the possibility that an approach that seems to work well with 24 or 48 cores may introduce catastrophic problems when the core count gets higher. No chip manufacturer will take a chance on an innovative chip design without overwhelming evidence that it works as advertised.

An MIT group that specializes in computer architecture has developed a research tool: a software simulator, dubbed Hornet, that models the performance of multicore chips much more accurately than its predecessors do. At the Fifth International Symposium on Networks-on-Chip in 2011, the group took the best-paper prize for work in which they used the simulator to analyze a promising and much-studied multicore-computing technique, finding a fatal flaw that other simulations had missed. And in a forthcoming issue of IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, the researchers present a new version of the simulator that factors in power consumption as well as patterns of communication between cores, the processing times of individual tasks, and memory-access patterns.

The flow of data through a chip with hundreds of cores is monstrously complex, and previous software simulators have sacrificed some accuracy for the sake of efficiency. For more accurate simulations, researchers have typically used hardware models — programmable chips that can be reconfigured to mimic the behavior of multicore chips. According to Myong Hyon Cho, a PhD student in the Department of Electrical Engineering and Computer Science (EECS) and one of Hornet’s developers, Hornet is intended to complement, not compete with, these other two approaches. “We think that Hornet sits in the sweet spot between them,” Cho says.

The various tasks performed by a chip’s many components are synchronized by a master clock; during each “clock cycle,” each component performs one task. Hornet is significantly slower than its predecessors, but it can provide a “cycle-accurate” simulation of a chip with 1,000 cores. “‘Cycle-accurate’ means the results are precise to the level of a single cycle,” Cho explains. “For example, [Hornet has] the ability to say, ‘This task takes 1,223,392 cycles to finish.’”
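
In code, a cycle-accurate simulator boils down to a global clock loop in which every modeled component does one cycle's worth of work per tick; the minimal sketch below (not Hornet itself) shows why such a simulator can report exact cycle counts.

```python
class Stage:
    """Toy component: each task occupies it for a fixed number of clock cycles."""
    def __init__(self, cycles_per_task):
        self.cycles_per_task = cycles_per_task
        self.remaining = cycles_per_task

    def tick(self):
        self.remaining -= 1
        if self.remaining == 0:          # the current task finished this cycle
            self.remaining = self.cycles_per_task
            return True
        return False

def run(stages, tasks_to_finish):
    cycle = 0
    finished = 0
    while finished < tasks_to_finish:
        cycle += 1                       # advance the global clock by one cycle
        finished += sum(stage.tick() for stage in stages)
    return cycle                         # exact cycle count at which the work completed

print(run([Stage(3), Stage(5)], tasks_to_finish=10))
```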

Existing simulators are good at evaluating chips’ general performance, but they can miss problems that arise only in rare, pathological cases. Hornet is much more likely to ferret those out, as it did in the case of the research presented at the Network-on-Chip Symposium. There, Cho, his adviser and EECS professor Srini Devadas, and their colleagues analyzed a promising multicore-computing technique in which the chip passes computational tasks to the cores storing the pertinent data rather than passing data to the cores performing the pertinent tasks. Hornet identified the risk of a problem called deadlock, which other simulators had missed. (Deadlock is a situation in which some number of cores are waiting for resources — communications channels or memory locations — in use by other cores. No core will abandon the resource it has until it’s granted access to the one it needs, so clock cycles tick by endlessly without any of the cores doing anything.)
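
The parenthetical deadlock scenario can be made concrete with a tiny wait-for graph, in which each core points at the core holding the resource it needs; a cycle in that graph means none of the cores involved will ever make progress. This is a generic illustration, not how Hornet models the problem.

```python
def has_deadlock(waits_for):
    """Detect a cycle in a wait-for graph given as {core: core_it_waits_on}."""
    for start in waits_for:
        seen = set()
        node = start
        while node in waits_for:         # follow the chain of waiting cores
            if node in seen:
                return True              # came back around: a cycle, i.e. deadlock
            seen.add(node)
            node = waits_for[node]
    return False

# Core 0 waits on core 1, core 1 on core 2, core 2 on core 0: nobody can proceed.
print(has_deadlock({0: 1, 1: 2, 2: 0}))  # True
print(has_deadlock({0: 1, 1: 2}))        # False: core 2 isn't waiting on anyone
```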

In addition to identifying the risk of deadlock, the researchers also proposed a way to avoid it — and demonstrated that their proposal worked with another Hornet simulation. That illustrates Hornet’s advantage over hardware systems: the ease with which it can be reconfigured to test out alternative design proposals.

Building simulations that will run on hardware “is more tricky than just writing software,” says Edward Suh, an assistant professor of electrical and computer engineering at Cornell University, whose group used an early version of Hornet that just modeled communication between cores. “It’s hard to say whether it’s inherently more difficult to write, but at least right now, there’s less of an infrastructure, and students do not know those languages as well as they do regular programming language. So as of right now, it’s more work.” Hornet, Suh says, could have advantages in situations where “you want to test out several ideas quickly, with good accuracy.”

Suh points out, however, that because Hornet is slower than either hardware simulations or less-accurate software simulations, “you tend to simulate a short period of the application rather than trying to run the whole application.” But, he adds, “That’s definitely useful if you want to know if there are some abnormal behaviors.” And furthermore, “there are techniques people use, like statistical sampling, or things like that, to say, ‘these are representative portions of the application.’”

26
EEE / Fiber laser points to woven 3-D displays
« on: May 09, 2018, 01:06:53 PM »
Most light emitters, from candles to light bulbs to computer screens, look the same from any angle. But in a paper published this week on the Nature Photonics website, MIT researchers report the development of a new light source — a fiber only a little thicker than a human hair — whose brightness can be controllably varied for different viewers.

The fiber thus opens the possibility of 3-D displays woven from flexible fibers that project different information to viewers’ left and right eyes. The fiber could also enable medical devices that can be threaded into narrow openings to irradiate diseased tissue, selectively activating therapeutic compounds while leaving healthy tissue untouched.

The paper is the work of seven researchers affiliated with MIT’s Research Laboratory of Electronics (RLE), including Yoel Fink, a professor of materials science and electrical engineering and the RLE’s director; John Joannopoulos, the Francis Wright Davis Professor of Physics; lead author Alexander Stolyarov, a graduate student at Harvard University who is doing his PhD research with Fink’s group; and Lei Wei, a postdoc at RLE. The work was funded by the U.S. Army and the National Science Foundation, through MIT’s Institute for Soldier Nanotechnologies and Center for Materials Science and Engineering.

The newly developed fiber has a hollow core; surrounding this core are alternating layers of materials with different optical properties, which together act as a mirror. In the core is a droplet of fluid that can be moved up and down the fiber. When the droplet receives energy, or is “pumped” — in experiments, the researchers used another laser to pump the droplet — it emits light. The light bounces back and forth between the mirrors, emerging from the core as a 360-degree laser beam.

Surrounding the core are four channels filled with liquid crystals, which vary the brightness of the emitted light; each liquid-crystal channel is controlled by two electrode channels running parallel to it. Yet despite the complexity of its structure, the fiber is only 400 micrometers across. (A human hair is usually somewhere around 100 micrometers in diameter.)

In experiments, the researchers simultaneously activated liquid crystals on opposite sides of the fiber to investigate a hypothetical application in which a transparent, woven display would present the same image to viewers on both sides — not mirror images, as a display that emitted light uniformly would. But in principle, Stolyarov says, there’s no reason a fiber couldn’t have many liquid-crystal channels that vary the light intensity in several different directions. “You can build as many of these liquid-crystal channels as you want around the laser,” Stolyarov says. “The process is very scalable.”

As a display technology, the fibers have the obvious drawback that each of them provides only one image pixel. To make the fibers more useful, the researchers are investigating the possibility that the single pixel — the droplet of fluid — could oscillate back and forth fast enough to fool the viewer into perceiving a line rather than a colored point.

Even before the researchers answer that question, however, the fiber could prove useful in the burgeoning field of photodynamic therapy, in which light activates injected therapeutic compounds only at targeted locations.

“The coolest thing about this work, really, is the way it’s made,” says Marko Loncar, an associate professor of electrical engineering at Harvard University. “The technology that they used to do it, basically, they can make kilometers of these things. It’s remarkable.”

Loncar adds, “And they envision this being used for surgeries and things like that, where it would be really hard to use any other laser approach.”

Loncar also thinks that the problem of pumping the fluid droplet back and forth to produce images is probably soluble. “There are entire lasers that depend on microfluidics,” he says. “The handling of fluids on a small scale nowadays is a pretty developed technology. So I don’t see this as a major obstacle.”

27
EEE / Chips as mini Internets
« on: May 09, 2018, 01:06:14 PM »
Today, a typical chip might have six or eight cores, all communicating with each other over a single bundle of wires, called a bus. With a bus, however, only one pair of cores can talk at a time, which would be a serious limitation in chips with hundreds or even thousands of cores, which many electrical engineers envision as the future of computing.

Li-Shiuan Peh, an associate professor of electrical engineering and computer science at MIT, wants cores to communicate the same way computers hooked to the Internet do: by bundling the information they transmit into “packets.” Each core would have its own router, which could send a packet down any of several paths, depending on the condition of the network as a whole.

At the Design Automation Conference in June, Peh and her colleagues will present a paper she describes as “summarizing 10 years of research” on such “networks on chip.” Not only do the researchers establish theoretical limits on the efficiency of packet-switched on-chip communication networks, but they also present measurements performed on a test chip in which they came very close to reaching several of those limits.

Last stop for buses

In principle, multicore chips are faster than single-core chips because they can split up computational tasks and run them on several cores at once. Cores working on the same task will occasionally need to share data, but until recently, the core count on commercial chips has been low enough that a single bus has been able to handle the extra communication load. That’s already changing, however: “Buses have hit a limit,” Peh says. “They typically scale to about eight cores.” The 10-core chips found in high-end servers frequently add a second bus, but that approach won’t work for chips with hundreds of cores.

For one thing, Peh says, “buses take up a lot of power, because they are trying to drive long wires to eight or 10 cores at the same time.” In the type of network Peh is proposing, on the other hand, each core communicates only with the four cores nearest it. “Here, you’re driving short segments of wires, so that allows you to go lower in voltage,” she explains.
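
In a mesh of this kind, where each core talks only to its four nearest neighbors, a packet reaches a distant core by hopping from router to router over short wires. The sketch below uses simple dimension-order (“XY”) routing, a common textbook policy chosen here purely for illustration; it is not necessarily the routing scheme used on the MIT test chips.

```python
# Dimension-order ("XY") routing on a 2D mesh: a packet first moves along the x axis,
# then along the y axis, one neighboring router per hop. A textbook policy used here
# only for illustration, not a description of the MIT designs.

def xy_route(src, dst):
    """Return the sequence of (x, y) router coordinates a packet visits."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:                 # travel in x first
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:                 # then travel in y
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

path = xy_route((0, 0), (3, 2))
print(path)                   # [(0,0), (1,0), (2,0), (3,0), (3,1), (3,2)]
print(len(path) - 1, "hops")  # 5 hops, each over a short nearest-neighbor wire
```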

In an on-chip network, however, a packet of data traveling from one core to another has to stop at every router in between. Moreover, if two packets arrive at a router at the same time, one of them has to be stored in memory while the router handles the other. Many engineers, Peh says, worry that these added requirements will introduce enough delays and computational complexity to offset the advantages of packet switching. “The biggest problem, I think, is that in industry right now, people don’t know how to build these networks, because it has been buses for decades,” Peh says.

Forward thinking

Peh and her colleagues have developed two techniques to address these concerns. One is something they call “virtual bypassing.” In the Internet, when a packet arrives at a router, the router inspects its addressing information before deciding which path to send it down. With virtual bypassing, however, each router sends an advance signal to the next, so that it can preset its switch, speeding the packet on with no additional computation. In her group’s test chips, Peh says, virtual bypassing allowed a very close approach to the maximum data-transmission rates predicted by theoretical analysis.
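
One rough way to picture the benefit, as a back-of-the-envelope sketch rather than the group’s actual router design: if the lookahead signal lets each router preset its switch, the per-hop cycle normally spent inspecting the header disappears.

```python
# A conceptual sketch of virtual bypassing (not the MIT router implementation).
# Without bypassing, a router spends a cycle inspecting the packet header before
# forwarding. With bypassing, the previous router's lookahead signal lets the next
# router preset its switch, so that cycle is skipped.

ROUTE_CYCLE = 1   # cycle spent deciding the output port
LINK_CYCLE = 1    # cycle spent traversing the link to the next router

def latency(hops, bypassing):
    cycles = 0
    for _ in range(hops):
        if not bypassing:
            cycles += ROUTE_CYCLE   # header inspected at this router
        cycles += LINK_CYCLE        # packet forwarded through a preset (or just-set) switch
    return cycles

print("baseline :", latency(hops=5, bypassing=False), "cycles")
print("bypassing:", latency(hops=5, bypassing=True), "cycles")
```

In this toy accounting the saving is one cycle per hop; a real router pipeline has more stages, so the actual numbers differ.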

The other technique is something called low-swing signaling. Digital data consists of ones and zeroes, which are transmitted over communications channels as high and low voltages. Sunghyun Park, a PhD student advised by both Peh and Anantha Chandrakasan, the Joseph F. and Nancy P. Keithley Professor of Electrical Engineering, developed a circuit that reduces the swing between the high and low voltages from one volt to 300 millivolts. With its combination of virtual bypassing and low-swing signaling, the researchers’ test chip consumed 38 percent less energy than previous packet-switched test chips. The researchers have more work to do, Peh says, before their test chip’s power consumption gets as close to the theoretical limit as its data transmission rate does. But, she adds, “if we compare it against a bus, we get orders-of-magnitude savings.”
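
As a back-of-the-envelope illustration only: in the simplest model, the energy to charge a wire scales with the square of the voltage swing, so cutting the swing from one volt to 300 millivolts shrinks that term dramatically. Real low-swing drivers, and the 38 percent figure reported for the test chip, involve many other factors; the numbers below are assumptions for illustration, not the chip’s measurements.

```python
# Back-of-the-envelope only: take the energy to charge a wire of capacitance C through
# a voltage swing V as roughly C * V**2, assuming the reduced-swing driver also runs
# from a correspondingly reduced supply. This idealization is not how the test chip's
# 38 percent figure was obtained.

C = 1e-12          # 1 pF of wire capacitance (arbitrary illustrative value)
full_swing = 1.0   # volts
low_swing = 0.3    # volts

e_full = C * full_swing ** 2
e_low = C * low_swing ** 2
print(f"energy ratio (low / full) = {e_low / e_full:.2f}")   # ~0.09 in this idealized model
```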

Luca Carloni, an associate professor of computer science at Columbia University who also researches networks on chip, says “the jury is always still out” on the future of chip design, but that “the advantages of packet-switched networks on chip seem compelling.” He emphasizes that those advantages include not only the operational efficiency of the chips themselves, but also “a level of regularity and productivity at design time that is very important.” And within the field, he adds, “the contributions of Li-Shiuan are foundational.”

28
EEE / Sensing when the brain is under pressure
« on: May 09, 2018, 01:05:27 PM »
Brain tumors and head trauma, including concussions, can elevate pressure inside the skull, potentially crushing brain tissue or cutting off the brain’s blood supply. Monitoring pressure in the brains of such patients could help doctors determine the best treatment, but the procedure is so invasive — it requires drilling a hole through the skull — that it is done only in the most severely injured patients.

That may change with the development of a new technique that is much less risky. The method, described in the April 11 issue of Science Translational Medicine, could allow doctors to measure brain pressure in patients who have suffered head injuries that are milder, but would benefit from close monitoring.

Developed by researchers in MIT’s Research Laboratory of Electronics (RLE), the new technique is based on a computer model of how blood flows through the brain. Using that model, the researchers can calculate brain pressure from two less invasive measurements: arterial blood pressure and an ultrasound measurement of the velocity of blood flow through the brain.

With this approach, changes in brain pressure can be monitored over time, alerting doctors to problems that might build up slowly.

Under pressure

Pressure in the brain, also known as intracranial pressure (ICP), can rise due to the presence of excessive fluid (blood or cerebrospinal fluid), a brain tumor or swelling of the brain.

To measure this pressure, neurosurgeons drill a hole in the skull and insert a catheter into the brain tissue or a fluid-filled cavity in the brain. In all but the most critically ill patients, the risk of infection or damage to the brain outweighs the benefits of this procedure, says study co-author George Verghese, the Henry Ellis Warren Professor of Electrical Engineering at MIT.

“There’s a much larger patient population for whom physicians would like this measurement, but the invasiveness stops them from obtaining it,” says Verghese, whose lab focuses on using computer models of human physiology to interpret patient data.

In his PhD thesis, Faisal Kashif, who is now a postdoc in Verghese’s lab and the lead author on the paper, developed a computer model that relates arterial blood pressure and blood flow through the brain to pressure in the brain. The flow of blood through the brain is caused by the difference in pressure between the blood entering the brain and pressure inside the brain (ICP). Therefore, using Kashif’s model, ICP can be calculated from the flow and the pressure of blood entering the brain.

The pressure of blood entering the brain is not directly measurable, so the MIT team used radial arterial pressure, taken by inserting a catheter at the wrist, as a proxy for that measurement. They then used their model of blood flow to compensate for the difference in location.
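
A toy version of the underlying idea, much simpler than Kashif’s actual model: if blood flow is driven by the difference between arterial pressure and ICP through an effective resistance R, then over a short window in which ICP and R are roughly constant, ABP(t) ≈ ICP + R · flow(t), and both unknowns can be recovered with a least-squares fit. All numbers in the sketch are synthetic.

```python
# A toy illustration of estimating ICP from arterial blood pressure and cerebral blood
# flow velocity. It assumes a far simpler model than the MIT one: over a short window,
# ABP(t) ~ ICP + R * flow(t), with ICP and the effective resistance R roughly constant,
# so both can be recovered by ordinary least squares. All data below are synthetic.

import numpy as np

rng = np.random.default_rng(0)
true_icp, true_r = 15.0, 1.2          # mmHg and an arbitrary resistance unit
flow = 50 + 10 * np.sin(np.linspace(0, 6 * np.pi, 200))          # synthetic flow velocity
abp = true_icp + true_r * flow + rng.normal(0, 0.5, flow.size)   # synthetic arterial pressure

# Fit abp = icp + r * flow  ->  design matrix [1, flow]
A = np.column_stack([np.ones_like(flow), flow])
icp_est, r_est = np.linalg.lstsq(A, abp, rcond=None)[0]
print(f"estimated ICP ~ {icp_est:.1f} mmHg, effective resistance ~ {r_est:.2f}")
```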

Peripheral arterial pressure can also be measured continuously and noninvasively by using a finger cuff similar to the arm cuff commonly used to measure blood pressure. The researchers are now investigating whether data obtained this way is accurate enough to use in their model.

Validation

The researchers verified the accuracy of their technique using data collected several years ago by collaborator Marek Czosnyka at Cambridge University in the U.K., from patients with traumatic brain injury. This was one of the few data sets that included all the measurements they needed, along with the proper time stamps. Czosnyka sent the data on radial arterial blood pressure and ultrasound blood flow velocity to the MIT team, which then ran the numbers through their model and came up with an estimated ICP. They then sent that back to Czosnyka for comparison.

Their results were slightly less accurate than those obtained with the best invasive procedures, but comparable to other invasive procedures that are still in clinical use, and to some less invasive techniques that have been tried.

“It’s a holy grail of clinical neurosurgery to find a noninvasive way to measure pressure,” says James Holsapple, chief of neurosurgery at Boston Medical Center. “It would be a big step if we could get our hands on something reliable.”

The new MIT approach shows promise, Holsapple says, adding that an important next step is to incorporate the technology into a system that would be easy for hospital staff to use and could record data over many hours or days.

The MIT team, along with co-author Vera Novak of Beth Israel Deaconess Medical Center (BIDMC) in Boston, is now collaborating with doctors at BIDMC to test their approach on patients in the neurosurgical intensive care unit.

“It’s still at the validation stage. To convince people that this works, you need to build up more [data] than we currently have,” Verghese says. “Our hope is that once it’s been validated on additional sorts of patients, where you’re able to show that you can match what the invasive measurement is, people will have confidence in starting to apply it to patients who are currently not getting monitored. That’s where we see the big potential.”

Thomas Heldt, a research scientist in RLE and senior author of the paper, says that once the data collection and model are well-established, the team hopes to test different patient populations — such as athletes with concussions, or soldiers who have experienced explosions — to come up with ways to determine the extent of injury and when it is not yet safe for an athlete or soldier to return to the field.

Another potential application is monitoring astronauts during and after long space flights. NASA has observed signs of elevated ICP in some of these astronauts, and is now seeking new ways to measure it.

29
EEE / Their own devices
« on: May 09, 2018, 01:04:48 PM »
As the United States seeks to reinvigorate its job market and move past economic recession, MIT News examines manufacturing’s role in the country’s economic future through this series on work at the Institute around manufacturing.

Basic advances in medicine create a need for further medical advances, among other things. The general increase in life expectancy over the last several decades, in America and globally, means that more people require more kinds of sophisticated medical care later in life. Indeed, numerous economic studies show that the demand for health care in the United States, for one, will only increase over the long run.

Therefore, while medicine has kept evolving — we now benefit from surgical techniques and drugs that did not exist two decades ago — the unrelenting need for medical innovation remains intact. MIT’s Medical Electronic Device Realization Center (MEDRC) aims to address this issue by using advances in microelectronics to help develop new medical technologies that could be manufactured at large volumes.

“The microelectronics industry has changed computation, communications and consumer electronics in a big way,” says Charles Sodini, the Clarence J. LeBel Professor of Electrical Engineering at MIT and a co-founder of the MEDRC. “The medical device industry looks like one the microelectronics industry could affect as well.”

Sodini and the other co-founders of the MEDRC — Brian W. Anthony, a research scientist in MIT’s Laboratory for Manufacturing and Productivity, and Joel Voldman, an associate professor in MIT’s Department of Electrical Engineering and Computer Science — have identified five current research focuses. These include noninvasive tools to gather internal bodily data; better ultrasound; wearable devices that monitor vital signs; devices to produce faster results for clinical blood, urine and saliva tests; and better ways of extracting useful information from tests. Their long-range goal is to help medical care become more accurate, quicker and more flexible for doctors and patients alike.

MEDRC’s directors have recruited industry partners and are actively discussing their ideas with a broad set of doctors, engineers and corporate executives in forums such as a workshop they are hosting in May.

As one product of these talks, this month the MEDRC signed a three-year agreement with Maxim Integrated Products, a semiconductor firm based in Sunnyvale, Calif. Currently, MIT and Maxim researchers are working to produce a device for measuring intracranial pressure (ICP), or the pressure inside the skull, without having to operate on patients. That makes Maxim the third firm to partner with the MEDRC, which was founded with initial support from General Electric and Analog Devices — whose founder, Ray Stata ’57, SM ’58, is a life member emeritus of the MIT Corporation, the Institute’s board of trustees.

“We conceived of the MEDRC in order to bring these kinds of companies together here, along with the technologists at MIT and the physicians across the river [in Boston],” Sodini says. “We want to increase the interactions between [them].”

Searching for maximum performance

Consider the case of Maxim, a firm that designs integrated circuits for a wide range of computing and consumer products and is now expanding its reach in the medical-device industry. Many people who are at risk of stroke or have endured a serious head injury can benefit from measurement of their ICP; an increase can often signal a dangerous decrease in blood supply to the brain. But obtaining such information is very difficult to do, since it currently entails surgically drilling into the skull and placing a transducer inside.

It would be useful, then, if a noninvasive method of measuring ICP could be developed: About 800,000 adults in the United States have strokes every year, making strokes the third-leading cause of death in the country.

Starting this spring as part of the MIT-Maxim partnership, Maxim design director Brian Brandt, an electrical engineer who has been working in the company’s Chelmsford, Mass., office, is now a visiting scientist at MIT; Brandt is working alongside an engineering graduate student, Sabino Pietrangelo, who is developing a noninvasive ICP-measuring prototype. In all, Maxim will provide funding for three MIT PhD students over the three-year period.

To be sure, measuring ICP is a difficult task that interests many researchers; last week, scientists in MIT’s Research Laboratory of Electronics released a new paper on the topic. But by creating a structure for prototyping and funding to back it, the MEDRC concept, as Anthony notes, is intended to lead new concepts out of technology’s so-called “Valley of Death,” in which promising ideas languish in labs, uncommercialized. And Maxim, for its part, sees its initial MEDRC project as part of a larger investment in health care. Currently Maxim is identifying a second medical product area to research at MIT.

“This is the best place in the world to do something like MEDRC,” says Brandt, noting the confluence of medical firms, academic researchers and high-powered hospitals in the Cambridge and Boston region.

‘Trying to change the way people think’ about medical tools

The MEDRC requires that industry partners have one scientist in residence on campus, to better facilitate intellectual collaboration with other researchers. GE, for instance, has sent to campus Kai Thomenius, the chief technologist of GE Global Research and a recognized expert in ultrasound, to enhance its own effort to make basic ultrasound more consistent and easier to use.

General-purpose ultrasound devices are also a longtime interest of Anthony, who directs MIT’s Master of Engineering in Manufacturing program. The goal, Anthony says, is to develop everyday ultrasound devices that are easier to use and that provide consistent points of reference for multiple ultrasound scans over time.

“Ultimately what you would like to do in any medical-imaging scenario is to have an image, and examine if an object in that image, a tumor or vessel, has changed, and track that over time,” Anthony says. The device being developed at MEDRC would ensure that ultrasound images are taken from the same point in two ways: by recognizing the patient’s skin where the device rests, and from similarities in the ultrasound images themselves.
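
As an illustration of the image-similarity half of that idea (a toy sketch, not the MEDRC device’s algorithm): a small patch from one scan can be located in a later scan by cross-correlation, revealing the offset between the two acquisitions. All arrays below are synthetic.

```python
# Toy sketch: align two scans by image similarity (not the MEDRC device's method).
# A patch from the first scan is located in the second by zero-mean cross-correlation.

import numpy as np

rng = np.random.default_rng(1)
scan1 = rng.random((64, 64))
scan2 = np.roll(scan1, shift=(3, -2), axis=(0, 1))   # same content, shifted a few pixels

patch = scan1[20:36, 20:36]
patch0 = patch - patch.mean()

best, best_pos = -np.inf, None
for i in range(scan2.shape[0] - 16 + 1):
    for j in range(scan2.shape[1] - 16 + 1):
        window = scan2[i:i + 16, j:j + 16]
        score = np.sum((window - window.mean()) * patch0)   # zero-mean cross-correlation
        if score > best:
            best, best_pos = score, (i, j)

print("estimated offset:", (best_pos[0] - 20, best_pos[1] - 20))   # ~ (3, -2), the applied shift
```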

Analog Devices is more focused on developing smaller devices that can measure vital signs — heart rate and blood pressure, among others — on mobile platforms. That could enable data to be gathered while patients are at home, not only in clinics or hospitals. “The whole idea of continuous monitoring versus spot-checking, when you go see a physician or clinician, is what’s being enabled by many of these diagnostic devices,” Sodini says.

Many other kinds of diagnoses, he adds, could be made more readily with the help, in part, of microelectronics. “The way we do lab tests today is the equivalent of the way computing was done when you handed a deck of cards to the computer operator and he gave you back a big printout from the IBM blue machine back there somewhere,” Sodini says. “Instead, you want to bring that equipment down as close to the customer or patient as you can.” It is possible, he offers, that “somebody who’s bedridden and needs these tests might be able to live at home instead of having to live in the hospital or a care center.”

The MEDRC co-founders also suggest it would be desirable to forge partnerships with some of the Boston area’s top-tier pharmaceutical companies, since nimbler diagnostic tools could help doctors better track the effects of medicines, or even help new products move through the lengthy clinical trials process.

“We’re not just trying to lower the costs of today’s technology,” Sodini says. “We’re trying to change the way people think about using those tools.”

30
EEE / Algorithmic incentives
« on: May 09, 2018, 01:03:25 PM »
In 1993, MIT cryptography researchers Shafi Goldwasser and Silvio Micali shared in the first Gödel Prize for theoretical computer science for their work on interactive proofs — a type of mathematical game in which a player attempts to extract reliable information from an unreliable interlocutor.

In their groundbreaking 1985 paper on the topic, Goldwasser, Micali and the University of Toronto’s Charles Rackoff ’72, SM ’72, PhD ’74 proposed a particular kind of interactive proof, called a zero-knowledge proof, in which a player can establish that he or she knows some secret information without actually revealing it. Today, zero-knowledge proofs are used to secure transactions between financial institutions, and several startups have been founded to commercialize them.

At the Association for Computing Machinery’s Symposium on Theory of Computing in May, Micali, the Ford Professor of Engineering at MIT, and graduate student Pablo Azar will present a new type of mathematical game that they’re calling a rational proof, which modifies interactive proofs by giving them an economic component. Like interactive proofs, rational proofs may have implications for cryptography, but they could also suggest new ways to structure incentives in contracts.

“What this work is about is asymmetry of information,” Micali says. “In computer science, we think that valuable information is the output of a long computation, a computation I cannot do myself.” But economists, Micali says, model knowledge as a probability distribution that accurately describes a state of nature. “It was very clear to me that both things had to converge,” he says.

A classical interactive proof involves two players, sometimes designated Arthur and Merlin. Arthur has a complex problem he needs to solve, but his computational resources are limited; Merlin, on the other hand, has unlimited computational resources but is not trustworthy. An interactive proof is a procedure whereby Arthur asks Merlin a series of questions. At the end, even though Arthur can’t solve his problem himself, he can tell whether the solution Merlin has given him is valid.

In a rational proof, Merlin is still untrustworthy, but he’s a rational actor in the economic sense: When faced with a decision, he will always choose the option that maximizes his economic reward. “In the classical interactive proof, if you cheat, you get caught,” Azar explains. “In this model, if you cheat, you get less money.”

Complexity connection

Research on both interactive proofs and rational proofs falls under the rubric of computational-complexity theory, which classifies computational problems according to how hard they are to solve. The two best-known complexity classes are P and NP. Roughly speaking, P is a set of relatively easy problems, while NP contains some problems that, as far as anyone can tell, are very, very hard.

Problems in NP include the factoring of large numbers, the selection of an optimal route for a traveling salesman, and so-called satisfiability problems, in which one must find conditions that satisfy sets of logical restrictions. For instance, is it possible to contrive an attendance list for a party that satisfies the logical expression (Alice OR Bob AND Carol) AND (David AND Ernie AND NOT Alice)? (Yes: Bob, Carol, David and Ernie go to the party, but Alice doesn’t.) In fact, the vast majority of the hard problems in NP can be recast as satisfiability problems.
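
With only five invitees, the expression can be checked, and every satisfying attendance list counted, by brute force, as in the sketch below; the point of the complexity discussion is that this stops being feasible as the guest list grows.

```python
# Brute-force check of the party-list example above (AND binds tighter than OR).
# Feasible here only because there are just five invitees.
from itertools import product

def satisfies(alice, bob, carol, david, ernie):
    return (alice or (bob and carol)) and (david and ernie and not alice)

# The attendance list proposed in the text: Bob, Carol, David and Ernie go; Alice doesn't.
print(satisfies(alice=False, bob=True, carol=True, david=True, ernie=True))  # True

# Counting every satisfying attendance list is the harder problem discussed next.
count = sum(satisfies(*bits) for bits in product([False, True], repeat=5))
print(count, "of", 2 ** 5, "possible lists satisfy the expression")
```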

To get a sense of how rational proofs work, consider the question of how many solutions a satisfiability problem has — an even harder problem than finding a single solution. Suppose that the satisfiability problem is a more complicated version of the party-list problem, one involving 20 invitees. With 20 invitees, there are 1,048,576 possibilities for the final composition of the party. How many of those satisfy the logical expression? Arthur doesn’t have nearly enough time to test them all.

But what if Arthur instead auctions off a ticket in a lottery? He’ll write down one perfectly random list of party attendees — Alice yes, Bob no, Carol yes and so on — and if it satisfies the expression, he’ll give the ticketholder $1,048,576. How much will Merlin bid for the ticket?

Suppose that Merlin knows that there are exactly 300 solutions to the satisfiability problem. The chances that Arthur’s party list is one of them are thus 300 in 1,048,576. By standard expected-value reasoning, a 300-in-1,048,576 shot at $1,048,576 is worth exactly $300. So if Merlin is a rational actor, he’ll bid $300 for the ticket. From that information, Arthur can deduce the number of solutions.
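
The arithmetic behind Merlin’s bid is just an expected-value calculation, sketched below using the hypothetical count of 300 solutions from the example.

```python
# Expected-value arithmetic behind Merlin's bid (a minimal sketch of the example above;
# the actual rational-proof protocol and its payment rule are more involved).
# With 20 invitees there are 2**20 equally likely random lists; if k of them satisfy the
# formula, a ticket paying $2**20 when Arthur's random list is a solution is worth $k.

n_lists = 2 ** 20     # 1,048,576 possible attendance lists
k = 300               # hypothetical number of satisfying lists, as in the text
payout = n_lists      # $1,048,576 if Arthur's random list happens to be a solution

expected_value = (k / n_lists) * payout
print(f"rational bid: ${expected_value:.0f}")   # $300 -> Arthur reads off k = 300
```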

First-round knockout

The details are more complicated than that, and of course, with very few exceptions, no one in the real world wants to be on the hook for a million dollars in order to learn the answer to a math problem. But the upshot of the researchers’ paper is that with rational proofs, they can establish in one round of questioning — “What do you bid?” — what might require millions of rounds using classical interactive proofs. “Interaction, in practice, is costly,” Azar says. “It’s costly to send messages over a network. Reducing the interaction from a million rounds to one provides a significant savings in time.”

“I think it’s yet another case where we think we understand what’s a proof, and there is a twist, and we get some unexpected results,” says Moni Naor, the Judith Kleeman Professorial Chair in the Department of Computer Science and Applied Mathematics at Israel’s Weizmann Institute of Science. “We’ve seen it in the past with interactive proofs, which turned out to be pretty powerful, much more powerful than you normally think of proofs that you write down and verify as being.” With rational proofs, Naor says, “we have yet another twist, where, if you assign some game-theoretical rationality to the prover, then the proof is yet another thing that we didn’t think of in the past.”

Naor cautions that the work is “just at the beginning,” and that it’s hard to say when it will yield practical results, and what they might be. But “clearly, it’s worth looking into,” he says. “In general, the merging of the research in complexity, cryptography and game theory is a promising one.”

Micali agrees. “I think of this as a good basis for further explorations,” he says. “Right now, we’ve developed it for problems that are very, very hard. But how about problems that are very, very simple?” Rational-proof systems that describe simple interactions could have an application in crowdsourcing, a technique whereby computational tasks that are easy for humans but hard for computers are farmed out over the Internet to armies of volunteers who receive small financial rewards for each task they complete. Micali imagines that they might even be used to characterize biological systems, in which individual organisms — or even cells — can be thought of as producers and consumers.
