
Messages - Rohan Sarker

EEE / Thwarting the cleverest attackers
« on: May 09, 2018, 01:02:42 PM »
In the last 10 years, cryptography researchers have demonstrated that even the most secure-seeming computer is shockingly vulnerable to attack. The time it takes a computer to store data in memory, fluctuations in its power consumption and even the noises it emits can betray information to a savvy assailant.

Attacks that use such indirect sources of information are called side-channel attacks, and the increasing popularity of cloud computing makes them an even greater threat. An attacker would have to be pretty motivated to install a device in your wall to measure your computer’s power consumption. But it’s comparatively easy to load a bit of code on a server in the cloud and eavesdrop on other applications it’s running.

Fortunately, even as they’ve been researching side-channel attacks, cryptographers have also been investigating ways of stopping them. Shafi Goldwasser, the RSA Professor of Electrical Engineering and Computer Science at MIT, and her former student Guy Rothblum, who’s now a researcher at Microsoft Research, recently posted a long report on the website of the Electronic Colloquium on Computational Complexity, describing a general approach to mitigating side-channel attacks. At the Association for Computing Machinery’s Symposium on Theory of Computing (STOC) in May, Goldwasser and colleagues will present a paper demonstrating how the technique she developed with Rothblum can be adapted to protect information processed on web servers.

In addition to preventing attacks on private information, Goldwasser says, the technique could also protect devices that use proprietary algorithms so that they can’t be reverse-engineered by pirates or market competitors — an application that she, Rothblum and others described at last year’s AsiaCrypt conference.

Today, when a personal computer is in use, it’s usually running multiple programs — say, a word processor, a browser, a PDF viewer, maybe an email program or a spreadsheet program. All the programs are storing data in memory, but the computer’s operating system won’t let any program look at the data stored by any other. The operating systems running on servers in the cloud are no different, but a malicious program could launch a side-channel attack simply by sending its own data to memory over and over again. From the time the data storage and retrieval takes, it can infer what the other programs are doing with remarkable accuracy.

Goldwasser and Rothblum’s technique obscures the computational details of a program, whether it’s running on a laptop or a server. Their system converts a given computation into a sequence of smaller computational modules. Data fed into the first module is encrypted, and at no point during the module’s execution is it decrypted. The still-encrypted output of the first module is fed into the second module, which encrypts it in yet a different way, and so on.

The encryption schemes and the modules are devised so that the output of the final module is exactly the output of the original computation. But the operations performed by the individual modules are entirely different. A side-channel attacker could extract information about how the data in any given module is encrypted, but that won’t let him deduce what the sequence of modules does as a whole. “The adversary can take measurements of each module,” Goldwasser says, “but they can’t learn anything more than they could from a black box.”
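The flavor of this construction can be conveyed with a toy sketch — emphatically not Goldwasser and Rothblum’s actual scheme, which rests on far stronger cryptography. Here each “module” operates only on additively masked values and hands off its result under a fresh mask, so no intermediate value ever appears in the clear, yet unmasking the final output recovers exactly the original computation’s result:

```python
import random

N = 2**32  # all arithmetic modulo a fixed word size

def original(x):
    """The unprotected computation: add 7, triple, add 1."""
    return ((x + 7) * 3 + 1) % N

def masked_pipeline(x):
    # Fresh one-time masks; module i receives its input masked with
    # k[i] and emits its result masked with k[i+1].
    k = [random.randrange(N) for _ in range(4)]
    c = (x + k[0]) % N                      # encrypt the input

    # Module 1: add 7 under the mask, then switch masks.
    c = (c + 7 - k[0] + k[1]) % N

    # Module 2: multiply by 3; the mask scales too, so remove 3*k[1].
    c = (c * 3 - 3 * k[1] + k[2]) % N

    # Module 3: add 1, switch to the final mask.
    c = (c + 1 - k[2] + k[3]) % N

    return (c - k[3]) % N                   # decrypt the output
```

Each module sees only masked values, and each run uses different masks, yet the end-to-end result matches the unprotected computation — the black-box guarantee the quote describes, in miniature.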

The report by Goldwasser and Rothblum describes a type of compiler, a program that takes code written in a form intelligible to humans and converts it into the low-level instructions intelligible to a computer. In the report, the computational modules are an abstraction: the instruction that inaugurates a new module looks no different from the one that concluded the last. But in the STOC paper, the modules are executed on different servers on a network.

According to Nigel Smart, a professor of cryptology in the computer science department at the University of Bristol in England, the danger of side-channel attacks “has been known since the late ’90s.”

“There’s a lot of engineering that was done to try to prevent this from being a problem,” Smart says, “a huge amount of engineering work. This is a megabucks industry.” Much of that work, however, has relied on trial and error, Smart says. Goldwasser and Rothblum’s study, on the other hand, “is a much more foundational study, looking at really foundational, deep questions about what is possible.”

Moreover, Smart says, previous work on side-channel attacks tended to focus on the threat posed to handheld devices, such as cellphones and smart cards. “It would seem to me that the stuff that is more likely to take off in the long run is the stuff that’s talking about servers,” Smart says. “I don’t know anyone else outside MIT who’s looking at that.”

Smart cautions, however, that the work of Goldwasser and her colleagues is unlikely to yield practical applications in the near future. “In security, and especially cryptography, it takes a long time to go from an academic idea to something that’s actually used in the real world,” Smart says. “They’re looking at what could be possible in 10, 20 years’ time.”

EEE / In search of new ways of producing nano-materials
« on: May 09, 2018, 01:02:01 PM »
A life in academia was a natural career path for Jing Kong, the daughter of two Chinese academics at Tianjin Finance and Economics University: Her father taught and was editor of a journal, and her mother was in the university’s foreign trade department and later worked with graduate students.

Last year, after seven years at MIT, Kong was granted tenure as the ITT Career Development Associate Professor of Electrical Engineering.

Her interest in science and technology started, as it does for many people, with an inspiring teacher. “I had many very good teachers,” Kong recalls, but there was “one in particular, a teacher of physics in middle school, Baoyi Liu. He gave me a lot of encouragement, and helped me to be interested” in the subject.

While in high school, “I took part in math, physics and chemistry competitions. I was chosen for the preparation class for a chemistry Olympics,” Kong says. Although she didn’t end up being chosen for the team, her year of preparation for the event at a Beijing high school entitled her to admission to one of China’s top universities. She chose Beijing University because of its proximity to her hometown.

Kong studied English during her undergraduate years, encouraged by many of her classmates who were planning to go abroad to finish their studies. After graduating in 1997, she decided to attend Stanford University for graduate studies — because, she says, its acceptance letter was first to arrive.

At Stanford, Kong began studying carbon nanotubes, microscopic cylinders formed by single-atom-thick sheets of carbon, which were by then a hot research area. She credits her “very talented” thesis advisor, Hongjie Dai, for the fact that her research in that field, which focused on finding better ways of synthesizing the material, was “very fruitful, and produced quite a lot of publications.”

Working with several other graduate students, Kong found what turned out to be a very effective way of improving the production of nanotubes and controlling their growth, which made it much easier to produce electronic devices from them. “It turned out to be very useful,” she says, and the team shared the technique with many other research groups.

While she enjoyed her work at Stanford, Kong eventually felt burned out and a bit lost, and began questioning the meaning of her efforts — so she joined a campus evangelical fellowship. At first, she says, “I was very much resistant to that idea that there is a God, but my perception changed after a seminar and discussion there.” By the time she graduated, she recalls, she had become a Christian; ever since, she says, her faith has played “a critical role in my life.”

Kong’s first job after earning her doctorate was as a researcher at NASA’s Ames Research Center, near the Stanford campus. (Her husband, He Dong, an electrical engineer whom she had met at Beijing University and then married while pursuing her doctorate, already had a job in the Bay Area.) But she found pure research unsatisfying, and longed to return to an academic environment where she could work with students and spend her life in a more meaningful way, sharing her religious faith with others. She received an offer from MIT, and after a brief stint as a postdoc at Delft University in the Netherlands, she started work at the Institute in 2004. She and her husband are now raising two daughters.

Kong’s research at MIT has continued to focus on carbon nanomaterials, including nanotubes and graphene sheets. She has pioneered a new method of producing large sheets of graphene — previously available only in tiny flakes — and is continuing to work on improving the method. “I want to improve the quality of the material we make, and share the methods with colleagues,” she says.

With carbon nanotubes, she has focused on developing ways to use the tiny structures as extremely sensitive chemical detectors for toxic gases, and ways of integrating them into new kinds of electronic devices.

Kong is emphatic about what is most important to her. “The research is only a platform for me to do God’s work,” she says. “His creation, the way he made this world, is very interesting. It’s amazing, really.”

EEE / The elusive capacity of networks
« on: May 09, 2018, 01:01:20 PM »
In its early years, information theory — which grew out of a landmark 1948 paper by MIT alumnus and future professor Claude Shannon — was dominated by research on error-correcting codes: How do you encode information so as to guarantee its faithful transmission, even in the presence of the corrupting influences engineers call "noise"?

Recently, one of the most intriguing developments in information theory has been a different kind of coding, called network coding, in which the question is how to encode information in order to maximize the capacity of a network as a whole. For information theorists, it was natural to ask how these two types of coding might be combined: If you want to both minimize error and maximize capacity, which kind of coding do you apply where, and when do you do the decoding?

What makes that question particularly hard to answer is that no one knows how to calculate the data capacity of a network as a whole — or even whether it can be calculated. Nonetheless, in the first half of a two-part paper, which was published recently in IEEE Transactions on Information Theory, MIT's Muriel Médard, California Institute of Technology's Michelle Effros and the late Ralf Koetter of the University of Technology in Munich show that in a wired network, network coding and error-correcting coding can be handled separately, without reduction in the network's capacity. In the forthcoming second half of the paper, the same researchers demonstrate some bounds on the capacities of wireless networks, which could help guide future research in both industry and academia.

A typical data network consists of an array of nodes — which could be routers on the Internet, wireless base stations or even processing units on a single chip — each of which can directly communicate with a handful of its neighbors. When a packet of data arrives at a node, the node inspects its addressing information and decides which of several pathways to send it along.

Calculated confusion

With network coding, on the other hand, a node scrambles together the packets it receives and sends the hybrid packets down multiple paths; at each subsequent node they're scrambled again in different ways. Counterintuitively, this can significantly increase the capacity of the network as a whole: Hybrid packets arrive at their destination along multiple paths. If one of those paths is congested, or if one of its links fails outright, the packets arriving via the other paths will probably contain enough information that the recipient can piece together the original message.
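A minimal illustration of the idea uses XOR as the “scrambling” operation: a node holding packets a and b can forward a, b and a⊕b along three different paths, and the destination can reconstruct both originals from any two of the three arrivals. (Practical schemes, such as random linear network coding, combine packets over larger finite fields, but the recovery principle is the same.)

```python
def xor_bytes(p, q):
    """Combine two equal-length packets bit by bit."""
    return bytes(x ^ y for x, y in zip(p, q))

def recover(received):
    """Reconstruct packets (a, b) from any two of {'a', 'b', 'a^b'}.

    `received` maps a path label to the packet that survived on it.
    """
    if 'a' in received and 'b' in received:
        return received['a'], received['b']
    if 'a' in received:                      # b = a ^ (a XOR b)
        return received['a'], xor_bytes(received['a'], received['a^b'])
    return xor_bytes(received['b'], received['a^b']), received['b']
```

Whichever single path fails, the remaining two packets still determine the original pair — the redundancy that lets hybrid packets ride out congestion or link failure.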

But each link between nodes could be noisy, so the information in the packets also needs to be encoded to correct for errors. "Suppose that I'm a node in a network, and I see a communication coming in, and it is corrupted by noise," says Médard, a professor of electrical engineering and computer science. "I could try to remove the noise, but by doing that, I'm in effect making a decision right now that maybe would have been better taken by someone downstream from me who might have had more observations of the same source."

On the other hand, Médard says, if a node simply forwards the data it receives without performing any error correction, it could end up squandering bandwidth. "If the node takes all the signal it has and does not whittle down its representation, then it might be using a lot of energy to transmit noise," she says. "The question is, how much of the noise do I remove, and how much do I leave in?"

In their first paper, Médard and her colleagues analyze the case in which the noise in a given link is unrelated to the signals traveling over other links, as is true of most wired networks. In that case, the researchers show, the problems of error correction and network coding can be separated without limiting the capacity of the network as a whole.

Noisy neighbors

In the second paper, the researchers tackle the case in which the noise on a given link is related to the signals on other links, as is true of most wireless networks, since the transmissions of neighboring base stations can interfere with each other. This complicates things enormously: Indeed, Médard points out, information theorists still don't know how to quantify the capacity of a simple three-node wireless network, in which two nodes relay messages to each other via a third node.

Nonetheless, Médard and her colleagues show how to calculate upper and lower bounds on the capacity of a given wireless network. While the gap between the bounds can be very large in practice, knowing the bounds could still help network operators evaluate the benefits of further research on network coding. If the observed bit rate on a real-world network is below the lower bound, the operator knows the minimum improvement that the ideal code would provide; if the observed rate is above the lower bound but below the upper, then the operator knows the maximum improvement that the ideal code might provide. If even the maximum improvement would afford only a small savings in operational expenses, the operator may decide that further research on improved coding isn't worth the money.
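The operator’s reasoning here is simple interval arithmetic. A sketch, with hypothetical rate figures:

```python
def coding_gain_bounds(observed_rate, lower_bound, upper_bound):
    """Guaranteed and best-case throughput gains (same units as the
    rates) from deploying an ideal network code, given capacity bounds."""
    guaranteed = max(lower_bound - observed_rate, 0.0)
    best_case = max(upper_bound - observed_rate, 0.0)
    return guaranteed, best_case
```

For example, if a network moves 80 Mb/s and the capacity bounds are 100 and 150 Mb/s, an ideal code is guaranteed to add at least 20 Mb/s and could add at most 70; if the observed rate already sits between the bounds, only the best-case figure remains, and whether it justifies further coding research is the operator’s call.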

"The separation theorem they proved is of fundamental interest," says Raymond Yeung, a professor of information engineering and co-director of the Institute of Network Coding at the Chinese University of Hong Kong. "While the result itself is not surprising, it is somewhat unexpected that they were able to prove the result in such a general setting."

Yeung cautions, however, that while the researchers have "decomposed a very difficult problem into two," one of those problems "remains very difficult. … The bound is in terms of the solution to another problem which is difficult to solve," he says. "It is not clear how tight this bound is; that needs further research."

Two years ago, Martin Rinard's group at MIT's Computer Science and Artificial Intelligence Laboratory proposed a surprisingly simple way to make some computer procedures more efficient: Just skip a bunch of steps. Although the researchers demonstrated several practical applications of the technique, dubbed loop perforation, they realized it would be a hard sell. "The main impediment to adoption of this technique," Imperial College London's Cristian Cadar commented at the time, "is that developers are reluctant to adopt a technique where they don't exactly understand what it does to the program."

Loop perforation is just one example of a way in which computer scientists are looking to trade a little bit of accuracy for substantial gains in performance. Others include high-speed chips that yield slightly inaccurate answers to arithmetic problems and low-power memory circuits that don't guarantee perfectly faithful storage. But all of these approaches provoke skepticism among engineers: If a computing system is intrinsically unreliable, how do we know it won't fail catastrophically?

At the Association for Computing Machinery's Conference on Programming Language Design and Implementation in June, Rinard's group will present a new mathematical framework that allows computer scientists to reason rigorously about sloppy computation. The framework can provide mathematical guarantees that if a computer program behaves as intended, so will a fast-but-inaccurate modification of it.

"Loop perforation shares with a lot of the research we've done this kind of happy-go-lucky, let's-give-it-a-go-and-see-what-happens approach," says Rinard, a professor in MIT's Department of Electrical Engineering and Computer Science. "But once you observe a phenomenon, it helps to understand why you see what you see and to put a formal framework around it."

Incentive structure

The new research, which also involved lead author Michael Carbin and his fellow graduate students Deokhwan Kim and Sasa Misailovic, fits into the general category of formal verification. Verification is a method for mathematically proving that a program does what it's supposed to. It's used in hardware design, in academic work on algorithms and in the development of critical software that can't tolerate bugs. But because it's labor intensive, it's rarely used in the development of commercial applications.

That's starting to change, however, as automated verification tools become more reliable and accessible. Carbin hopes that the performance gains promised by techniques such as loop perforation will give programmers an incentive to adopt formal verification techniques. "We're identifying all these opportunities where programmers can get much bigger benefits than they could before if they're willing to do a little verification," Carbin says. "If you have these large performance gains that just don't come about otherwise, then you can incentivize people to actually go about doing these things."

As its name might imply, loop perforation involves the computer routines known as loops, in which the same operation is performed over and over on a series of data; a perforated loop is one in which iterations of the operation are omitted at regular intervals. Like other techniques that trade accuracy for performance, Rinard explains, loop perforation works well with tasks that are, in the jargon of computer science, nondeterministic: They don't have to yield a single, inevitable answer. A few pixels in a frame of Internet video might be improperly decoded without the viewer's noticing; similarly, the precise order of the first three results of a Web search may not matter as much as returning the results in a timely fashion.
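In its simplest form, perforating a loop just means skipping iterations at a fixed stride. A hypothetical example on an averaging loop (illustrative, not taken from the paper):

```python
def mean_exact(values):
    """Baseline: visit every element."""
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

def mean_perforated(values, stride=2):
    """Perforated loop: visit only every `stride`-th element.

    Does roughly 1/stride of the work; the answer is an estimate,
    which is acceptable when the data is statistically well-behaved.
    """
    total, count = 0.0, 0
    for i in range(0, len(values), stride):
        total += values[i]
        count += 1
    return total / count
```

On smooth data the perforated version does half the work and lands close to the exact mean — exactly the accuracy-for-performance trade the technique offers.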

Drawing boundaries

With the researchers' framework, the programmer must specify "acceptability properties" for each procedure in a program. Those properties can be the types of things that formal verification typically ensures: that the result of a computation falls within a range of values, for instance, or that the output of a function adheres to a particular file format. But with the MIT framework, the programmer can also specify acceptability properties by reference to the normal execution of the program: The output of the modified procedure must be within 10 percent of the output of the unmodified procedure, say, or it must yield the same values, but not necessarily in the same order.
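Two acceptability properties of the kinds just described — a relative-error bound against the unmodified program, and same-values-in-any-order — might be encoded as runtime checks like the following (an illustrative sketch, not the paper’s formal logic):

```python
def within_relative_error(reference, relaxed, tol=0.10):
    """Relaxed output must be within `tol` (e.g. 10 percent)
    of the unmodified program's output."""
    if reference == 0:
        return relaxed == 0
    return abs(relaxed - reference) / abs(reference) <= tol

def same_values_any_order(reference, relaxed):
    """Relaxed output must contain exactly the reference values,
    though not necessarily in the reference order."""
    return sorted(reference) == sorted(relaxed)
```

The point of the framework is that such properties can be stated once and then proved, rather than merely spot-checked, for the relaxed program.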

One advantage of the framework is that it allows developers who have already verified their programs to reuse much of their previous reasoning. In many cases, the programmer can define an acceptability property in such a way that, if it's met, the relaxed program will also preserve other properties that the programmer has verified.

In the framework described in the new paper, the programmer must also describe how the procedure is to be modified, whether through loop perforation or some other technique. But Carbin says the group is already working on a computer system that allows the programmer to simply state acceptability properties; the system then automatically determines which modifications preserve those properties, with what types of performance gains.

"This idea of relaxation — trading the traditional notion that a computer has to do every part of a computation exactly correctly for huge energy savings, or performance savings, or ease of implementation — is not a new idea," says Dan Grossman, an associate professor of computer science and engineering at the University of Washington, who also works on program relaxation. "But what this paper does is put in on a much firmer footing."

The paper also, Grossman says, shows "how you can formally verify software. But what it's doing by doing that is explaining what relaxed software actually is, what it means, what exactly it's relaxing."

EEE / System improves automated monitoring of security cameras
« on: May 09, 2018, 12:59:52 PM »
Police and security teams guarding airports, docks and border crossings from terrorist attack or illegal entry need to know immediately when someone enters a prohibited area, and who they are. A network of surveillance cameras is typically used to monitor these at-risk locations 24 hours a day, but these can generate too many images for human eyes to analyze.

Now a system being developed by Christopher Amato, a postdoc at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), can perform this analysis more accurately and in a fraction of the time it would take a human camera operator. “You can’t have a person staring at every single screen, and even if you did the person might not know exactly what to look for,” Amato says. “For example, a person is not going to be very good at searching through pages and pages of faces to try to match [an intruder] with a known criminal or terrorist.”

Existing computer vision systems designed to carry out this task automatically tend to be fairly slow, Amato says. “Sometimes it’s important to come up with an alarm immediately, even if you are not yet positive exactly what is happening,” he says. “If something bad is going on, you want to know about it as soon as possible.”

So Amato and colleagues Komal Kapoor, Nisheeth Srivastava and Paul Schrater at the University of Minnesota are developing a system that uses mathematics to reach a compromise between accuracy — so the system does not trigger an alarm every time a cat walks in front of the camera, for example — and the speed needed to allow security staff to act on an intrusion as quickly as possible.

For camera-based surveillance systems, operators typically have a range of different computer vision algorithms they could use to analyze the video feed. These include skin-detection algorithms that can identify a person in an image, and background-detection systems that flag unusual objects or anything moving through the scene.

To decide which of these algorithms to use in a given situation, Amato’s system first carries out a learning phase, in which it assesses how each piece of software works in the type of setting in which it is being applied, such as an airport. To do this, it runs each of the algorithms on the scene, to determine how long it takes to perform an analysis, and how certain it is of the answer it comes up with. It then adds this information to its mathematical framework, known as a partially observable Markov decision process (POMDP).

Then, for any given situation — if it wants to know if an intruder has entered the scene, for example — the system can decide which of the available algorithms to run on the image, and in which sequence, to give it the most information in the least amount of time. “We plug all of the things we have learned into the POMDP framework, and it comes up with a policy that might tell you to start out with a skin analysis, for example, and then depending what you find out you might run an analysis to try to figure out who the person is, or use a tracking system to figure out where they are [in each frame],” Amato says. “And you continue doing this until the framework tells you to stop, essentially, when it is confident enough in its analysis to say there is a known terrorist here, for example, or that nothing is going on at all.”
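A drastically simplified sketch of that stopping behavior: treat each vision algorithm as a noisy detector with a known accuracy and running cost, update a belief about the intruder with Bayes’ rule after each result, and stop as soon as the belief is confident either way. (The actual system solves a POMDP offline to decide which algorithm to run next; the fixed ordering and the accuracy and cost figures here are hypothetical.)

```python
def bayes_update(belief, accuracy, fired):
    """P(intruder) after a detector that fires with prob `accuracy`
    when an intruder is present and prob 1 - accuracy when not."""
    like_present = accuracy if fired else 1.0 - accuracy
    like_absent = (1.0 - accuracy) if fired else accuracy
    num = like_present * belief
    return num / (num + like_absent * (1.0 - belief))

def analyze(detectors, results, threshold=0.95, prior=0.5):
    """Run detectors in order, stopping once confident either way.

    `detectors` is a list of (name, accuracy, cost); `results` gives
    each detector's (pre-simulated) yes/no output. Returns the final
    belief and the total cost spent before stopping.
    """
    belief, spent = prior, 0.0
    for (name, accuracy, cost), fired in zip(detectors, results):
        belief = bayes_update(belief, accuracy, fired)
        spent += cost
        if belief >= threshold or belief <= 1.0 - threshold:
            break  # confident enough: raise the alarm or stand down
    return belief, spent
```

With two agreeing 90-percent-accurate detectors the belief already clears 95 percent, so the third, slower analysis is never run — the essence of trading a little certainty for a faster alarm.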

Like a human detective, the system can also take context into account when analyzing a set of images, Amato says. So for instance, if the system is being used at an airport, it could be programmed to identify and track particular people of interest, and to recognize objects that are strange or in unusual locations, he says. It could also be programmed to sound an alarm whenever there are any objects or people in the scene, when there are too many objects, or if the objects are moving in ways that give cause for concern.

In addition to port and airport security, the system could monitor video information obtained by a fleet of unmanned aircraft, Amato says. It could also be used to analyze data from weather monitoring sensors to determine where tornados are likely to appear, or information from water samples taken by autonomous underwater vehicles, he says. The system would determine how to obtain the information it needs in the least amount of time and with the least possible number of sensors.

Matthijs Spaan, an artificial intelligence researcher at Delft University of Technology in the Netherlands, says the work demonstrates how artificial intelligence decision-making techniques can benefit data-intensive applications such as automated video surveillance. “Video processing has high computational demands, and the work shows how POMDPs can be applied to dynamically trade off computation cost with prediction accuracy,” he says. “The POMDP model excels at decision-making regarding uncertain information, in this case whether an intruder is present or not.”

Amato and his colleagues will present their system in a paper at the 24th IAAI Conference on Artificial Intelligence in Toronto in July.

EEE / Teaching self-assembling structures a new trick
« on: May 09, 2018, 12:59:13 PM »
Researchers at MIT have found a new way of making complex three-dimensional structures using self-assembling polymer materials that form tiny wires and junctions. The work has the potential to usher in a new generation of microchips and other devices made up of submicroscopic features.

Although similar self-assembling structures with very fine wires have been produced before, this is the first time the structures have been extended into three dimensions with different, independent configurations on different layers, the researchers say. The research is published this week in the journal Science.

Caroline Ross, the Toyota Professor of Materials Science and Engineering at MIT, says there has been “a lot of interest” among semiconductor researchers in finding ways to produce chip features that are much narrower than the wavelength of light — and hence narrower than what can be achieved using present light-based fabrication systems. Self-assembly based on polymers has been an active area of research, Ross says, but “what we did in this paper was push it into the third dimension.”

She and her colleagues began by creating an array of tiny posts on a substrate of silicon; they then coated the surface with materials called block copolymers, which have a natural tendency to assemble into long cylindrical structures. By carefully controlling the initial spacing of the posts, Ross explains, the researchers were able to set the spacing, angles, bends and junctions of the cylinders that form on the surface. What’s more, she says, “Each of the two layers of cylinders can be independently controlled using these posts,” making it possible to create complex 3-D configurations.

Amir Tavakkoli, a visiting graduate student from the National University of Singapore and lead author of the Science paper, says many researchers have tried to produce complex arrangements of nanoscale wires through self-assembly. But earlier attempts used complex processes with many steps, and had failed to control the resulting configurations well. The new system is simpler, Tavakkoli says, and “not only controlled the alignment of the wires, but showed we can even have sharp bends and junctions” at precisely determined locations.

“It wasn’t expected to be possible,” says MIT graduate student Kevin Gotrik. “It was a surprising result. We stumbled upon it, and then had to figure out how it works.”

There were a number of barriers to overcome in making the system practical, Gotrik says. For example, the posts fabricated on the surface are the key to controlling the whole self-assembly process, but they need to be quite a bit taller than they are wide, which could lead some to topple over; the MIT team ultimately found materials and shapes that would be stable. “We explored a wide range of conditions,” Gotrik says.

Graduate student Adam Hannon says the team used computer simulations of the structures in order to explore the effects of different post configurations on the double-layer 3-D structure. These simulations were compared with the most promising structures observed in the laboratory to get greater insight into how to control the resulting structures that formed.

So far, the MIT team has only produced two-layer configurations, but Alfredo Alexander-Katz, an assistant professor of materials science and engineering, says, “I think it would be feasible to go to three layers” while still maintaining full control over the arrangement of structures on each layer.

A key enabling technology was the MIT lab’s capability, using electron-beam lithography, to make 10-nanometer-wide cylindrical posts with precisely controlled positioning. These posts, in turn, guide the positioning of the self-assembling cylinders. Karl Berggren, an associate professor of electrical engineering, says it’s as if the lithography puts down an array of pillars, and these pillars then control the complex, multilevel routing of crisscrossing highways.

In earlier work, the MIT researchers had demonstrated that this self-assembly method could be used to create wires that are much finer than those that can be made by existing photolithography techniques for producing microchips — and thus help lead the way to next-generation devices that pack even more wires and transistors into a given area of silicon chip material. “In principle, this is scalable to quite small dimensions,” Ross says, far smaller than the 15-nanometer width of the cylinders produced so far — which is already less than half the width of the finest wires in existing microchips.

The basic technologies involved are compatible with existing manufacturing equipment in the semiconductor industry, the researchers say. But this is basic research that is probably still far from actual chip production, they caution. Within the next year the team hopes to use this methodology to produce a simple electronic device.

The technique is not limited to producing wires on a silicon chip, Ross and her colleagues say. The same method could be used to create 3-D arrays of other kinds of materials — such as proteins or DNA molecules, for example — in order to create biological detectors or drug-delivery systems.

Craig Hawker, a professor of chemistry and biochemistry at the University of California at Santa Barbara, says this is a “far-reaching finding,” which “goes a long way to fulfilling the demands of the International Technology Roadmap for Semiconductors, which calls for a robust, commercially viable nanopatterning technique.”

Hawker adds, “The robustness and power of this approach may also lead to applications outside lithography and microelectronics, with impact in water purification, membranes and organic photovoltaics.” He says this work is “a spectacular example of multidisciplinary work, with advances in chemistry, physics and nanotechnology seamlessly combined to address a critical technological and important societal problem.”

The work was supported by the Semiconductor Research Corporation, the FENA Center, the Nanoelectronics Research Initiative, the Singapore-MIT Alliance, the National Science Foundation, Tokyo Electron and Taiwan Semiconductor Manufacturing Company.

EEE / Sharing data links in networks of cars
« on: May 09, 2018, 12:58:22 PM »
Wi-Fi is coming to our cars. Ford Motor Co. has been equipping cars with Wi-Fi transmitters since 2010; according to an Agence France-Presse story last year, the company expects that by 2015, 80 percent of the cars it sells in North America will have Wi-Fi built in. The same article cites a host of other manufacturers worldwide that either offer Wi-Fi in some high-end vehicles or belong to standards organizations that are trying to develop recommendations for automotive Wi-Fi.

Two Wi-Fi-equipped cars sitting at a stoplight could exchange information free of charge, but if they wanted to send that information to the Internet, they’d probably have to use a paid service such as the cell network or a satellite system. At the ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, taking place this month in Portugal, researchers from MIT, Georgetown University and the National University of Singapore (NUS) will present a new algorithm that would allow Wi-Fi-connected cars to share their Internet connections. “In this setting, we’re assuming that Wi-Fi is cheap, but 3G is expensive,” says Alejandro Cornejo, a graduate student in electrical engineering and computer science at MIT and lead author on the paper.

The general approach behind the algorithm is to aggregate data from hundreds of cars in just a small handful, which then upload it to the Internet. The problem, of course, is that the layout of a network of cars is constantly changing in unpredictable ways. Ideally, the aggregators would be those cars that come into contact with the largest number of other cars, but they can’t be identified in advance.

Cornejo, Georgetown’s Calvin Newport and NUS’s Seth Gilbert — all three of whom did or are doing their doctoral work in Nancy Lynch’s group at MIT’s Computer Science and Artificial Intelligence Laboratory — began by considering the case in which every car in a fleet of cars will reliably come into contact with some fraction — say, 1/x — of the rest of the fleet in a fixed period of time. In the researchers’ scheme, when two cars draw within range of each other, only one of them conveys data to the other; the selection of transmitter and receiver is random. “We flip a coin for it,” Cornejo says.

Over time, however, “we bias the coin toss,” Cornejo explains. “Cars that have already aggregated a lot will start ‘winning’ more and more, and you get this chain reaction. The more people you meet, the more likely it is that people will feed their data to you.” The shift in probabilities is calculated relative to 1/x — the fraction of the fleet that any one car will meet.

The smaller the value of x, the smaller the number of cars required to aggregate the data from the rest of the fleet. But for realistic assumptions about urban traffic patterns, Cornejo says, 1,000 cars could see their data aggregated by only about five.
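As a toy illustration (a hypothetical sketch, not the authors' implementation), the biased coin-flip scheme can be simulated in a few lines of Python: each meeting picks a random pair of cars, and a car's chance of acting as the aggregator grows with the amount of data it already holds.

```python
import random

def simulate_aggregation(n_cars=100, n_meetings=20000, seed=42):
    """Toy model of the biased coin-flip aggregation scheme.

    Every car starts with one unit of data (its own). When two cars
    meet, one is chosen as the aggregator with probability proportional
    to the data it already holds -- the "biased coin" -- and the other
    hands over everything it carries.
    """
    rng = random.Random(seed)
    data = [1] * n_cars  # units of data held by each car
    for _ in range(n_meetings):
        a, b = rng.sample(range(n_cars), 2)
        total = data[a] + data[b]
        if total == 0:
            continue  # both cars were emptied earlier; nothing to transfer
        # Bias the coin: cars that have aggregated more win more often.
        receiver, sender = (a, b) if rng.random() < data[a] / total else (b, a)
        data[receiver] += data[sender]
        data[sender] = 0
    return data

data = simulate_aggregation()
holders = [i for i, d in enumerate(data) if d > 0]
print(f"{len(holders)} cars ended up holding all {sum(data)} units of data")
```

With these made-up parameters, the data from 100 cars typically ends up concentrated in just a handful of aggregators, mirroring the chain reaction Cornejo describes.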

Realistically, it’s not a safe assumption that every car will come in contact with a consistent fraction of the others: A given car might end up collecting some other cars’ data and then disappearing into a private garage. But the researchers were able to show that, if the network of cars can be envisioned as a series of dense clusters with only sparse connections between them, the algorithm will still work well.

Weirdly, however, the researchers’ mathematical analysis shows that if the network is a series of dense clusters with slightly more connections between them, aggregation is impossible. “There’s this paradox of connectivity where if you have these isolated clusters, which are well-connected, then we can guarantee that there will be aggregation in the clusters,” Cornejo says. “But if the clusters are well connected, but they’re not isolated, then we can show that it’s impossible to aggregate. It’s not only our algorithm that fails; you can’t do it.”

“In general, the ability to have cheap computers and cheap sensors means that we can generate a huge amount of data about our environment,” says John Heidemann, a research professor at the University of Southern California’s Information Sciences Institute. “Unfortunately, what’s not cheap is communications.”

Heidemann says that the real advantage of aggregation is that it enables the removal of redundancies in data collected by different sources, so that transmitting the data requires less bandwidth. Although Heidemann’s research focuses on sensor networks, he suspects that networks of vehicles could partake of those advantages as well. “If you were trying to analyze vehicle traffic, there’s probably 10,000 cars on the Los Angeles Freeway that know that there’s a traffic jam. You don’t need every one of them to tell you that,” he says.

EEE / Research update: Chips with self-assembling rectangles
« on: May 09, 2018, 12:57:37 PM »
Researchers at MIT have developed a new approach to creating the complex array of wires and connections on microchips, using a system of self-assembling polymers. The work could eventually lead to a way of making more densely packed components on memory chips and other devices.

The new method — developed by MIT visiting doctoral student Amir Tavakkoli of the National University of Singapore, along with two other graduate students and three professors in MIT’s departments of Electrical Engineering and Computer Science (EECS) and Materials Science and Engineering (DMSE) — is described in a paper to be published this August in the journal Advanced Materials; the paper is available online now.

The process is closely related to a method the same team described last month in a paper in Science, which makes it possible to produce three-dimensional configurations of wires and connections using a similar system of self-assembling polymers.

In the new paper, the researchers describe a system for producing arrays of wires that meet at right angles, forming squares and rectangles. While these shapes are the basis for most microchip circuit layouts, they are quite difficult to produce through self-assembly. When molecules self-assemble, explains Caroline Ross, the Toyota Professor of Materials Science and Engineering and a co-author of the papers, they have a natural tendency to create hexagonal shapes — as in a honeycomb or an array of soap bubbles between sheets of glass.

For example, an array of tiny ball bearings in a box “tends to give a hexagonal symmetry, even though it’s in a square box,” Ross says. “But that’s not what circuit designers want. They want patterns with 90-degree angles” — so overcoming that natural tendency was essential to producing a useful self-assembling system, she says.
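The hexagonal preference Ross describes is a matter of packing efficiency: equal circles cover about 91 percent of the plane in a hexagonal arrangement but only about 79 percent on a square grid. A quick check of the standard packing fractions (textbook values, not figures from the paper):

```python
import math

# Area fraction covered by equal circles of radius r on each lattice:
#   square lattice:    one circle of area pi*r^2 per (2r)^2 cell
#   hexagonal lattice: one circle per rhombic cell of area 2*sqrt(3)*r^2
square_fill = math.pi / 4
hexagonal_fill = math.pi / (2 * math.sqrt(3))

print(f"square packing:    {square_fill:.1%}")     # ~78.5%
print(f"hexagonal packing: {hexagonal_fill:.1%}")  # ~90.7%
```

The roughly 12-point gap in coverage is why self-assembling spheres and cylinders drift toward honeycomb order unless a template, such as the team's array of posts, forces 90-degree angles.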

The team’s solution creates an array of tiny posts on the surface that guides the patterning of the self-assembling polymer molecules. This turns out to have other advantages as well: In addition to producing perfect square and rectangular patterns of tiny polymer wires, the system also enables the creation of a variety of shapes of the material itself, including cylinders, spheres, ellipsoids and double cylinders. “You can generate this astounding array of features,” Ross says, “with a very simple template.”

Karl Berggren, an associate professor of electrical engineering at MIT and a co-author of the paper, explains that these complex shapes are possible because “the template, which is coated so as to repel one of the polymer components, causes a lot of local strain on the pattern. The polymer then twists and turns to try to avoid this strain, and in so doing rearranges on the surface. So we can defeat the polymer’s natural inclinations, and make it create much more interesting patterns.”

This system can also produce features, such as arrays of holes in the material, whose spacing is much closer than what can be achieved using conventional chip-making methods. That means it can produce much more closely packed features on the chip than today’s methods can create — an important step in the ongoing efforts to pack more and more electronic components onto a given microchip.

“This new technique can produce multiple [shapes or patterns] simultaneously,” Tavakkoli says. It can also make “complex patterns, which is an objective for nanodevice fabrication,” with fewer steps than current processes. Fabricating a large area of complex circuitry on a chip using electron-beam lithography “could take several months,” he says. By contrast, using the self-assembling polymer method would take only a few days.

That’s still far too long for manufacturing a commercial product, but Ross explains that this step needs to be done only once to create a master pattern, which can then be used to stamp a coating on other chips in a very rapid fabrication process.

The technique could extend beyond microchip fabrication as well, Ross says. For example, one approach to the quest to pack ever-greater amounts of data onto magnetic media such as computer hard disks is to use a magnetic coating with a very fine pattern stamped into it, precisely defining the areas where each bit of data is to be stored. Such fine patterning could potentially be created using this self-assembly method, she says, and then stamped onto the disks.

Craig Hawker, a professor of chemistry and biochemistry at the University of California at Santa Barbara who was not involved in this work, says, “There is a growing need and requirement for industry to find an alternative to traditional photolithography for the fabrication of cutting-edge microelectronic devices. This work represents a pivotal achievement in this area and clearly demonstrates that structures once considered impossible to achieve by a self-assembly strategy can now be prepared with a high degree of fidelity."

Tavakkoli and Ross’ colleagues in this work are DMSE doctoral students Adam Hannon and Kevin Gotrik, DMSE professor Alfredo Alexander-Katz and EECS professor Karl Berggren. The research, which included work at MIT’s Nanostructures Laboratory and Scanning-Electron-Beam Lithography facility, was funded by the Semiconductor Research Corporation, the Center on Functional Engineered Nano Architectonics, the National Resources Institute, the Singapore-MIT Alliance, the National Science Foundation, the Taiwan Semiconductor Manufacturing Company and Tokyo Electron.

Researchers at MIT and the University of Central Florida (UCF) have developed a versatile new fabrication technique for making large quantities of uniform spheres from a wide variety of materials — a technique that enables unprecedented control over the design of individual, microscopic particles. The particles, including complex, patterned spheres, could find uses in everything from biomedical research and drug delivery to electronics and materials processing.

The method is an outgrowth of a technique for making long, thin fibers out of multiple materials, developed over the last several years at MIT by members of the same team. The new work, reported this week in the journal Nature, begins by making thin fibers using this earlier method, but then adds an extra step of heating the fibers to create a line of tiny spheres — like a string of pearls — within these fibers.

Conventional fabrication of microscopic spherical particles uses a “bottom-up” approach, growing the spheres from even tinier “seeds” — an approach that is only capable of producing very tiny particles. This new “top-down” method, however, can produce spheres as small as 20 nanometers (about the size of the smallest known viruses) or as large as two millimeters (about the size of a pinhead), meaning the biggest particles are 100,000 times larger than the smallest ones. But for a given batch, the size of the spheres produced can be extremely uniform — much more so than is possible with the bottom-up approach.

Yoel Fink, a professor of materials science and director of MIT’s Research Laboratory of Electronics, whose group developed the earlier method of producing multimaterial fibers, explains that the new method can also produce multimaterial spheres consisting of different layers or segments. Even more complex structures are possible, he says, offering unprecedented control over particle architecture and composition.

The most likely short-term uses of the new process would be for biomedical applications, says Ayman Abouraddy, a former postdoc in Fink’s lab who is now an assistant professor at UCF’s College of Optics and Photonics. “Typical applications of nanoparticles today are for controlled drug delivery,” he says. But with this new process, two or more different drugs — even ones that are ordinarily incompatible — could be combined inside individual particles, and released only once they’ve reached their intended destination in the body.

More exotic possibilities could arise later, Abouraddy adds, including new “metamaterials” with advanced optical properties that were previously unattainable.

The basic process involves creating a large polymer cylinder, called a “preform,” containing an internal semiconductor cylinder core that is an exact scaled-up model of the final fiber structure; this preform is then heated until it is soft enough to be pulled into a thin fiber, like taffy. The internal structure of the fiber, made of materials that all soften at the same temperature, retains the internal configuration of the original cylinder.

The fiber is then heated further so that the semiconducting core forms a liquid, producing a series of discrete spherical droplets within the otherwise continuous fiber. This same phenomenon causes a diminishing stream of water from a faucet to eventually break up into a stream of droplets, famously captured by MIT’s Harold “Doc” Edgerton in his stroboscopic images.
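The droplet size can be estimated from the classical Plateau–Rayleigh analysis (a textbook inviscid approximation, not a figure from the Nature paper): the fastest-growing perturbation on a liquid cylinder has a wavelength of about 9.02 jet radii, and conserving volume then fixes each droplet at roughly 1.9 times the core radius.

```python
import math

def droplet_radius(jet_radius):
    """Estimate droplet radius from Plateau-Rayleigh breakup of a
    cylindrical jet (inviscid approximation).

    The fastest-growing perturbation has wavelength ~9.02 R, and each
    wavelength of cylinder collapses into one droplet of equal volume:
        pi * R^2 * lam = (4/3) * pi * r^3
    """
    lam = 9.02 * jet_radius
    return (3 * jet_radius**2 * lam / 4) ** (1 / 3)

# A 100-nanometer-radius core would break into droplets about 1.9x
# the core radius:
print(droplet_radius(100e-9))  # ~1.89e-7 m
```

Because the relationship is linear in the core radius, drawing the fiber thinner before heating directly shrinks the spheres it yields.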

Abouraddy says that during a visit to ancient temples in his native Egypt, he found an inscription showing that even long ago, people were aware of this degradation of a stream of water into droplets — caused by a process now known as Rayleigh instability.

In the new fabrication process developed by Abouraddy and Fink’s team, these droplets “freeze” in place as the fiber solidifies; the preform’s polymer sheath then keeps them locked in place until it is later dissolved away. This overcomes another problem with traditional production of nanoparticles: their tendency to clump together.

In principle, Abouraddy says, the discovery of this process for forming particles could have come many years ago. But even after theorists had predicted that such instabilities could form in the process of drawing fibers, the new discovery came by accident: Joshua Kaufman, a student of Abouraddy’s, was trying to produce fibers, but his experiment “failed” when the fiber kept breaking up into droplets.

Abouraddy, who knew about the theoretical possibility, immediately recognized that this “failure” was actually an important discovery — one that had eluded previous attempts simply because the process requires a precise combination of timing, temperature and materials. Kaufman is the lead author of the Nature paper.

“The ability to harness and control the fleeting fluid instability within a fiber has profound implications for future devices,” Fink says, and could lead to a wide variety of uses. While the group has demonstrated the production of six-segment “beach ball” particles, in principle much more complex structures, made of a variety of materials, should also be possible, he says. Any material that could be drawn into a fiber could now, in principle, be made into a small particle.

John Ballato, director of the Center for Optical Materials Science and Engineering Technologies at Clemson University, who was not involved in the new work, says that this fiber-draw production method is “a major step forward for the scale-up of designed nanoparticles with controlled chemistry and shape.” In addition to its potentially useful applications, Ballato says, this method “also yields new scientific insights” into Rayleigh instability, “a phenomenon that has been well studied, as anyone with a leaky faucet knows all too well.”

Malvin Teich, a professor emeritus of electrical and computer engineering at Boston University and Columbia University, says the approach used by these researchers “is novel and inventive, and it promises significant applications ranging from the fabrication of unique metamaterials to the delivery of drugs.” He adds, “The work is particularly significant because the approach is scalable, and because of the superior uniformity of the sizes and shapes of the particles produced, compared to what can be achieved using other techniques.”

The work was supported by the National Science Foundation, the Air Force Office of Scientific Research and the Army Research Office through MIT’s Institute for Soldier Nanotechnologies.

EEE / Arm Transfers Microprocessor Technology to Chinese JV
« on: May 09, 2018, 12:54:41 PM »
LONDON — With China looking to bolster its own semiconductor industry and reduce dependence on foreign technology, microprocessor IP firm Arm Holdings confirmed last week that its separate local joint venture entity, Arm mini China, started operations to license its technology locally in China.

This move effectively completes a process of technology transfer to its Chinese operation, enabling local chip developers to license its technology directly in China.

The U.K.-headquartered parent Arm had already announced intentions to establish the joint venture in China in February 2017, following the launch of the HOPU-Arm innovation fund. Backed by investments from a leading Chinese sovereign wealth fund and Chinese investment institutions and companies, it stated at the time its aim to invest in emerging technology companies and startups in China to accelerate development of applications in IoT, autonomous vehicles, cloud computing, big data, and artificial intelligence (AI).

Now, according to reports last week in the Nikkei Asian Review and the Chinese online site The Paper, Arm mini China has officially been registered in Shenzhen, China, with 51% being Chinese-owned (with investors including Bank of China and Baidu) and the remaining 49% owned by Arm. The reports also state that Arm mini China plans to IPO with a listing in China, which could be rapidly approved by the Chinese regulator.

EE Times asked Arm for an interview with a senior executive to elaborate on the development, but the company’s spokesperson responded only with a formal statement that declined to comment on press speculation while confirming the JV. The statement said that Arm chip shipments by Chinese customers have grown more than 110-fold over the past 10 years.

To make Arm technology available to more companies within China, the statement said, the company needed a Chinese partner to develop Arm-compliant technology that could be locally licensed in the Chinese market.

"Chinese organizations prefer to acquire technologies that have been fully developed by Chinese companies. This JV’s establishment will enable Arm-based semiconductor intellectual property to be tailored for the Chinese domestic ecosystem and make a broader portfolio of technology accessible to Chinese partners for China market needs," the statement read.

Some in the electronics industry commented privately last week that the move wasn’t strictly necessary, as the company could have continued operating as it has for many years in China. With its microprocessor intellectual property already licensed to both Chinese and global chip manufacturers, the purpose of the separate entity is mainly to address new Chinese chip developers and OEMs, as well as local innovation using its IP. It is thought that the parent company would also retain exclusive rights, for international markets, to new IP developed in China.

The coating, developed by researchers at the University of Illinois, is a specially engraved, nanostructured thin film that allows more light through than a flat surface, yet also provides electrical access to the underlying material -- a crucial combination for optoelectronics, devices that convert electricity to light or vice versa. The researchers, led by U. of I. electrical and computer engineering professor Daniel Wasserman, published their findings in the journal Advanced Materials.

"The ability to improve both electrical and optical access to a material is an important step towards higher-efficiency optoelectronic devices," said Wasserman, a member of the Micro and Nano Technology Laboratory at Illinois.

At the interface between two materials, such as a semiconductor and air, some light is always reflected, Wasserman said. This limits the efficiency of optoelectronic devices. If light is emitted in a semiconductor, some fraction of it will never escape the semiconductor material. Conversely, for a sensor or solar cell, some fraction of incoming light will never reach the detector to be collected and turned into an electrical signal. Researchers use a set of relations known as the Fresnel equations to describe the reflection and transmission at the interface between two materials.
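At normal incidence the Fresnel equations reduce to a one-line reflectance formula. For a typical high-index semiconductor (assuming n ≈ 3.5, a GaAs-like value chosen for illustration rather than taken from the paper), the bare-surface transmission lands close to the 70 percent figure the researchers quote:

```python
def normal_incidence_transmission(n1, n2):
    """Fraction of light transmitted across an interface at normal
    incidence, from the Fresnel equations:
        R = ((n1 - n2) / (n1 + n2))**2,  T = 1 - R
    """
    reflectance = ((n1 - n2) / (n1 + n2)) ** 2
    return 1 - reflectance

# Air (n = 1.0) into a GaAs-like semiconductor (n ~ 3.5, assumed value):
print(f"{normal_incidence_transmission(1.0, 3.5):.0%}")  # ~69%
```

The index mismatch alone thus reflects roughly 30 percent of the light, which is the loss the "moth-eye" nanopillar patterning is designed to beat.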

"It has been long known that structuring the surface of a material can increase light transmission," said study co-author Viktor Podolskiy, a professor at the University of Massachusetts at Lowell. "Among such structures, one of the more interesting is similar to structures found in nature, and is referred to as a 'moth-eye' pattern: tiny nanopillars which can 'beat' the Fresnel equations at certain wavelengths and angles."

Although such patterned surfaces aid in light transmission, they hinder electrical transmission, creating a barrier to the underlying electrical material.

"In most cases, the addition of a conducting material to the surface results in absorption and reflection, both of which will degrade device performance," Wasserman said.

The Illinois and Massachusetts team used a patented method of metal-assisted chemical etching, MacEtch, developed at Illinois by Xiuling Li, U. of I. professor of electrical and computer engineering and co-author of the new paper. The researchers used MacEtch to engrave a patterned metal film into a semiconductor to create an array of tiny nanopillars rising above the metal film. The combination of these "moth-eye" nanopillars and the metal film created a partially coated material that outperformed the untreated semiconductor.

"The nanopillars enhance the optical transmission while the metal film offers electrical contact. Remarkably, we can improve our optical transmission and electrical access simultaneously," said Runyu Liu, a graduate researcher at Illinois and a co-lead author of the work along with Illinois graduate researcher Xiang Zhao and Massachusetts graduate researcher Christopher Roberts.

The researchers demonstrated that their technique, which results in metal covering roughly half of the surface, can transmit about 90 percent of light to or from the surface. For comparison, the bare, unpatterned surface with no metal can only transmit 70 percent of the light and has no electrical contact.

The researchers also demonstrated their ability to tune the material's optical properties by adjusting the metal film's dimensions and how deeply it etches into the semiconductor.

"We are looking to integrate these nanostructured films with optoelectronic devices to demonstrate that we can simultaneously improve both the optical and electronic properties of devices operating at wavelengths from the visible all the way to the far infrared," Wasserman said.

EEE / New thin film transistor may lead to flexible devices
« on: May 09, 2018, 12:51:35 PM »
The findings of a University of Alberta electrical engineering team, published in the science journal Nature Communications, could open the door to the development of flexible electronic devices, with applications ranging from display technology to medical imaging and renewable energy production.

The team was exploring new uses for thin film transistors (TFT), which are most commonly found in low-power, low-frequency devices like the display screen you're reading from now. Efforts by researchers and the consumer electronics industry to improve the performance of the transistors have been slowed by the challenges of developing new materials or slowly improving existing ones for use in traditional thin film transistor architecture, known technically as the metal oxide semiconductor field effect transistor (MOSFET).

But the U of A electrical engineering team took an end run around the problem. Instead of developing new materials, the researchers improved performance by designing a new transistor architecture that takes advantage of bipolar action. In other words, instead of using one type of charge carrier, as most thin film transistors do, it uses both electrons and the absence of electrons (referred to as "holes") to contribute to electrical output. Their first breakthrough was forming an "inversion" hole layer in a wide-bandgap semiconductor, which had long been a great challenge in the solid-state electronics field.

Once this was achieved, "we were able to construct a unique combination of semiconductor and insulating layers that allowed us to inject "holes" at the MOS interface," said Gem Shoute, a PhD student in the Department of Electrical and Computer Engineering who is lead author on the article. Adding holes at the interface increased the chances of an electron "tunneling" across a dielectric barrier. Through this phenomenon, a type of quantum tunnelling, "we were finally able to achieve a transistor that behaves like a bipolar transistor."
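The tunnelling current the team exploits falls off exponentially with barrier thickness. A rough WKB estimate (generic textbook numbers, not the device's actual parameters) shows how sharply it depends on the dielectric width:

```python
import math

HBAR = 1.0545718e-34   # reduced Planck constant, J*s
M_E = 9.1093837e-31    # electron mass, kg
EV = 1.602176634e-19   # joules per electron-volt

def wkb_tunneling_probability(barrier_eV, width_m):
    """Rough WKB estimate of the probability that an electron tunnels
    through a rectangular barrier: T ~ exp(-2 * kappa * d), where
    kappa = sqrt(2 * m * Phi) / hbar for barrier height Phi.
    """
    kappa = math.sqrt(2 * M_E * barrier_eV * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# For an assumed 1 eV barrier: tunnelling is appreciable at 1 nm
# but essentially vanishes by 3 nm.
print(wkb_tunneling_probability(1.0, 1e-9))
print(wkb_tunneling_probability(1.0, 3e-9))
```

The exponential sensitivity is why injecting holes at the interface, which raises the odds of a tunnelling event, can so strongly change the device's output.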

"It's actually the best performing [TFT] device of its kind--ever," said materials engineering professor Ken Cadien, a co-author on the paper. "This kind of device is normally limited by the non-crystalline nature of the material that it is made of."

The dimensions of the device can be scaled with ease to improve performance and keep up with the need for miniaturization, an advantage that modern TFTs lack. The transistor has power-handling capabilities at least 10 times greater than those of commercially produced thin film transistors.

Electrical engineering professor Doug Barlage, who is Shoute's PhD supervisor and one of the paper's lead authors, says his group was determined to try new approaches and break new ground. He says the team knew it could produce a high-power thin film transistor--it was just a matter of finding out how.

"Our goal was to make a thin film transistor with the highest power handling and switching speed possible. Not many people want to look into that, but the raw properties of the film indicated that a dramatic performance increase was within reach," he said. "The high-quality, sub-30-nanometre (a human hair is 50,000 nanometres wide) layers of materials produced by Professor Cadien's group enabled us to successfully try these difficult concepts."

In the end, the team took advantage of the very phenomena other researchers considered roadblocks.

"Usually tunnelling current is considered a bad thing in MOSFETs and it contributes to unnecessary loss of power, which manifests as heat," explained Shoute. "What we've done is build a transistor that considers tunnelling current a benefit."

The team has filed a provisional patent on the transistor. Shoute says the next step is to put the transistor to work "in a fully flexible medium and apply these devices to areas like biomedical imaging, or renewable energy."

It's well known that photons, or units of light, travel faster than electrons and could therefore carry information more quickly through smaller chip structures. A switch designed in collaboration with researchers from ETH Zürich, the University of Washington and Virginia Commonwealth University bypasses a tendency for the unwanted absorption of light when using so-called surface plasmons, or light coupled to oscillations of free electron clouds, to help confine light to a nanoscale.

"The big idea behind this is going from electronic circuitry to photonic circuitry," said Vladimir Shalaev, Purdue's Bob and Anne Burnett Distinguished Professor of Electrical and Computer Engineering. "From electronics to photonics, you need some structures that confine light to be put into very small areas. And plasmonics seems to be the solution."

Even though plasmonics downsizes light, photons also get lost, or absorbed, rather than transferred to other parts of the computer chip when they interact with plasmons.

In a study publishing April 26 in Nature, researchers addressed this problem through the development of a switch, called a ring modulator, that uses resonance to control whether light couples with plasmons. When on, or out of resonance, light travels through silicon waveguides to other parts of the chip. When off, or in resonance, light couples with plasmons and is absorbed.

"When you have a purely plasmonic device, light can be lossy, but in this case it's a gain for us because it reduces a signal when necessary," said Soham Saha, a graduate research assistant in Purdue's school of electrical and computer engineering. "The idea is to select when you want loss and when you don't."

The loss creates a contrast between on and off states, thus better enabling control over the direction of light where appropriate for processing bits of information. A plasmon-assisted ring modulator also results in a smaller "footprint" because plasmons enable confinement of light down to nanoscale chip structures, Shalaev said.
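The on/off contrast can be captured with a minimal Lorentzian model of the ring's transmission dip (an illustrative sketch with made-up linewidth and extinction values, not the device's measured response):

```python
def ring_transmission(detuning, linewidth, min_transmission):
    """Toy model of a ring modulator: transmission dips to
    min_transmission on resonance (detuning = 0), where light couples
    into the lossy plasmonic mode, and recovers off resonance.
    """
    lorentzian = linewidth**2 / (detuning**2 + linewidth**2)
    return 1 - (1 - min_transmission) * lorentzian

# Shifting the resonance electrically toggles a fixed laser wavelength
# between the "off" (absorbing) and "on" (transmitting) states:
off_state = ring_transmission(0.0, linewidth=1.0, min_transmission=0.05)
on_state = ring_transmission(10.0, linewidth=1.0, min_transmission=0.05)
print(off_state, on_state)
```

In this picture the plasmonic loss is what makes the "off" state dark, which is exactly the point Saha makes: the absorption is recruited only when the signal should be suppressed.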

Purdue researchers plan to make this modulator fully compatible with complementary metal-oxide-semiconductor transistors, paving the way to truly hybrid photonic and electronic nanocircuitry for computer chips.

"Supercomputers already contain both electronic and optical components to do massive calculations very fast," said Alexandra Boltasseva, Purdue professor of electrical and computer engineering, whose lab specializes in plasmonic materials. "What we're working on would fit very well into this hybrid model, so we don't have to wait to use it when computer chips go all-optical."

Development of the plasmon assisted electro-optic modulator required expertise in not only plasmonics, but also integrated circuitry and nanophotonics from the leading group of Juerg Leuthold at ETH Zürich -- including Christian Haffner and other group members -- and in opto-electronic switching materials from Larry Dalton's group at the University of Washington. Haffner and Nathaniel Kinsey, former Purdue student and now a professor of electrical and computer engineering at Virginia Commonwealth University, along with Leuthold, Shalaev and Boltasseva, conceived the idea of a low-loss plasmon assisted electro-optic modulator for subwavelength optical devices, including compact on-chip sensing and communications technologies.

EEE / Light-powered healing of a wearable electrical conductor
« on: May 09, 2018, 12:50:27 PM »
Mechanical failure along a conductive pathway can cause the unexpected shutdown of electronic devices, ultimately limiting device lifetimes. In particular, wearable electronic devices, which inevitably undergo dynamic and vigorous motions (e.g., bending, folding, or twisting), are much more liable to suffer from such conductive failures compared with conventional flat electronic devices. To address this problem, various systems to realize healable electrical conductors have been proposed; however, rapid, noninvasive, and on-demand healing, factors that are all synergistically required, especially for wearable device applications, still remains challenging to realize.

Professor Jung-Ki Park and Professor Hee-Tak Kim in the Department of Chemical & Biomolecular Engineering at the Korea Advanced Institute of Science and Technology (KAIST) have devised a light-powered healable electrical conductor. Light-powered healing is implemented with a photochromic soft material (an azobenzene material) that can be moved directionally along the light's polarization. This unique directionality of the material's movement with respect to light polarization enables an efficient healing process regardless of crack propagation direction, light incident angle, and the number of cracks.

By depositing silver nanowires (AgNWs), the conducting material used in this study, onto the top layer of the flexible photochromic soft material, the researchers gave this optically healable material fully functional electrical conductivity. Notably, the AgNWs maintain conformable contact with the photochromic soft material even during the optical healing process. The AgNWs thus act as the conductive pathway and the photochromic soft material as a light-powered cargo carrier; the synergistic effect of combining these advantages provides rapid, noninvasive, on-demand healing for a flexible electrical conductor, making light-powered healing more amenable to dynamically deformable wearable devices than existing systems.

EEE / Transistors that can switch between two stable energy states
« on: May 09, 2018, 12:49:50 PM »
Modern computers are limited by a delay formed as electrons travel through the tiny wires and switches on a computer chip. To overcome this electronic backlog, engineers would like to develop a computer that transmits information using light, in addition to electricity, because light travels faster than electricity.

Having two stable energy states, or bistability, within a transistor allows the device to form an optical-electric switch. That switch will serve as the primary building block for optical logic -- the language needed for future optical computer processors to communicate -- said Milton Feng, the Nick Holonyak Jr. Emeritus Chair in electrical and computer engineering and the lead of a recent study.

"Building a transistor with electrical and optical bistability into a computer chip will significantly increase processing speeds," Feng said, "because the devices can communicate without the interference that occurs when limited to electron-only transistors."

In the latest study, the researchers describe how optical and electrical bistable outputs are constructed from a single transistor. The addition of an optical element creates a feedback loop using a process called electron tunneling that controls the transmission of light. The team published its results in the Journal of Applied Physics.
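The paper itself gives no equations or code, but the core idea -- that a feedback loop can give a switch two stable states at the same input -- can be illustrated with a deliberately simplified toy model. The sketch below is not the device physics (no tunneling or optics is modeled); it just iterates a generic saturating response with positive feedback (the `gain` and `feedback` parameters are arbitrary illustrative choices) and sweeps the drive up and then down, showing that mid-range drive levels settle into a low or a high state depending on history:

```python
import math

def settle(drive, y, gain=10.0, feedback=1.0, iters=300):
    # Relax the feedback loop to a stable output for the given drive level,
    # starting from the previous state y (the loop's "memory").
    for _ in range(iters):
        y = 1.0 / (1.0 + math.exp(-gain * (drive + feedback * y - 0.5)))
    return y

drives = [i / 100 - 0.5 for i in range(101)]  # sweep drive from -0.5 to +0.5

up, state = [], 0.0
for d in drives:                  # sweep the drive upward...
    state = settle(d, state)
    up.append(state)

down = []
for d in reversed(drives):        # ...then back down, keeping the state
    state = settle(d, state)
    down.append(state)
down.reverse()

i = drives.index(0.0)
print(f"drive=0.0: up-sweep output {up[i]:.3f}, down-sweep output {down[i]:.3f}")
```

Running the sweep shows hysteresis: at zero drive the up-sweep output stays low while the down-sweep output stays high, i.e., two stable states for one input -- the defining behavior of a bistable switch, however it is physically realized.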

Feng said the obvious solution to the bottleneck formed by big-data transfer -- eliminating the transistor's electronic data transmission and using optics alone -- is unlikely to happen.

"You cannot remove electronics entirely because you need to plug into a current and convert that into light," Feng said. "That's the problem with the all-optical computer concept some people talk about. It just is not possible because there is no such thing as an all-optical system."

Feng and Holonyak, the Bardeen Emeritus Chair in electrical and computer engineering and physics, in 2004 discovered that light -- previously considered to be a byproduct of transistor electronics -- could be harnessed as an optical signal. This paved the way for the development of the transistor laser, which uses light and electrons to transmit a signal.

The new transistor could enable new devices and applications that have not been possible with traditional transistor technology.

"This is a single device that provides bistability for both electrical and optical functions with one switch," Feng said. "It is totally new, and we are working hard to find more new applications for the device."

Feng and his team have demonstrated electro-optical bistability at -50 degrees Celsius. The next step was to show that the device can also operate at room temperature; Feng said they recently achieved that milestone, and the details will be published in an upcoming report.

"Any electronic device is virtually useless if it can't operate at room temperature," Feng said. "Nobody wants to carry a device in a refrigerator to keep it from getting too hot!"
