Instant-start computers possible with new breakthrough


Source:

Cornell University

Abstract:

If data could be encoded without current, it would require much less energy and make things like low-power, instant-on computing a ubiquitous reality. Scientists have made a breakthrough in that direction with a room-temperature magnetoelectric memory device. Equivalent to one computer bit, it exhibits the holy grail of next-generation nonvolatile memory: magnetic switchability, in two steps, with nothing but an electric field.

Article:

To encode data, today's computer memory technology uses electric currents -- a major limiting factor for reliability and shrinkability, and the source of significant power consumption. If data could instead be encoded without current -- for example, by an electric field applied across an insulator -- it would require much less energy, and make things like low-power, instant-on computing a ubiquitous reality.

A team at Cornell University led by postdoctoral associate John Heron, who works jointly with Darrell Schlom, professor of Industrial Chemistry in the Department of Materials Science and Engineering, and Dan Ralph, professor of Physics in the College of Arts and Sciences, has made a breakthrough in that direction with a room-temperature magnetoelectric memory device. Equivalent to one computer bit, it exhibits the holy grail of next-generation nonvolatile memory: magnetic switchability, in two steps, with nothing but an electric field. Their results were published online Dec. 17 in Nature, along with an associated "News and Views" article.

"The advantage here is low energy consumption," Heron said. "It requires a low voltage, without current, to switch it. Devices that use currents consume more energy and dissipate a significant amount of that energy in the form of heat. That is what's heating up your computer and draining your batteries."
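The scale of that saving can be illustrated with a rough back-of-the-envelope comparison. All numbers below are hypothetical, chosen only to show the two formulas at work (Joule heating E = I²Rt for a current-driven write versus capacitor charging E = ½CV² for a field-driven write); they are not figures from the study:

```python
# Hypothetical numbers, for illustration only: energy cost of writing one bit.

# Current-driven write: the flowing charge dissipates Joule heat, E = I^2 * R * t
I, R, t = 100e-6, 1e3, 1e-9        # 100 uA through 1 kOhm for 1 ns (assumed)
e_current = I**2 * R * t

# Field-driven write: only charges a tiny electrode capacitance, E = 0.5 * C * V^2
C, V = 1e-15, 1.0                  # 1 fF electrode charged to 1 V (assumed)
e_field = 0.5 * C * V**2

print(e_current, e_field)          # the current-driven write costs far more
```

Even with these toy values the current-driven write dissipates over an order of magnitude more energy, all of it as heat.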

The researchers made their device out of a compound called bismuth ferrite, a favorite among materials mavens for a spectacularly rare trait: It's both magnetic -- like a fridge magnet, it has its own, permanent local magnetic field -- and also ferroelectric, meaning it's always electrically polarized, and that polarization can be switched by applying an electric field. Such so-called ferroic materials are typically one or the other, rarely both, as the mechanisms that drive the two phenomena usually fight each other.

This combination makes it a "multiferroic" material, a class of compounds that has enjoyed a buzz over the last decade or so. Paper co-author Ramamoorthy Ramesh, Heron's Ph.D. adviser at the University of California, Berkeley, first showed in 2003 that bismuth ferrite can be grown as extremely thin films that exhibit enhanced properties compared with their bulk counterparts, igniting its relevance for next-generation electronics.

Because it's multiferroic, bismuth ferrite can be used for nonvolatile memory devices with relatively simple geometries. The best part is that it works at room temperature; other scientists, including Schlom's group, have demonstrated similar results with competing materials, but at unimaginably cold temperatures, like 4 kelvin (-452 degrees Fahrenheit) -- not exactly primed for industry. "The physics has been exciting, but the practicality has been absent," Schlom said.

A key breakthrough by this team was theorizing, and experimentally realizing, the kinetics of the switching in the bismuth ferrite device. They found that the switching happens in two distinct steps. One-step switching wouldn't have worked, and for that reason theorists had previously thought what they have achieved was impossible, Schlom said. But since the switching occurs in two steps, bismuth ferrite is technologically relevant.

The multiferroic device also appears to require an order of magnitude less energy than its chief competitor, a phenomenon called spin-transfer torque, which Ralph also studies and which harnesses different physics for magnetic switching. Spin-transfer torque is already used commercially, but only in limited applications. The researchers still have work to do: they have made just a single device so far, and computer memory involves arrays of billions of such devices. They also need to improve its durability. But for now, proving the concept is a major leap in the right direction.

 

Lead islands in a sea of graphene magnetize the material of the future

Source:

Plataforma SINC

Abstract:

Researchers have discovered that if lead atoms are intercalated on a graphene sheet, a powerful magnetic field is generated by the interaction of the electrons' spin with their orbital movement. This property could have implications for spintronics, an emerging technology to create advanced computational systems. Graphene is considered the material of the future due to its extraordinary optical, electronic and mechanical properties, especially because it conducts electrons very quickly. However, it does not have magnetic properties, and thus no method had been found to manipulate these electrons, or any of their properties, for use in new magnetoelectronic devices.

Article:

In the sea of graphene (over an iridium crystal), electrons' spin-orbit interaction is much lower than that created by intercalating a Pb island.

 

Researchers in Spain have discovered that if lead atoms are intercalated on a graphene sheet, a powerful magnetic field is generated by the interaction of the electrons' spin with their orbital movement. This property could have implications in spintronics, an emerging technology promoted by the European Union to create advanced computational systems.

Graphene is considered the material of the future due to its extraordinary optical, electronic and mechanical properties, especially because it conducts electrons very quickly. However, it does not have magnetic properties, and thus no method had been found to manipulate these electrons, or any of their properties, for use in new magnetoelectronic devices -- although Spanish scientists have now come upon a key.

Researchers from IMDEA Nanoscience, the Autonomous University of Madrid, the Madrid Institute of Materials Science (CSIC) and the University of the Basque Country describe in the journal Nature Physics this week how to create a powerful magnetic field using this new material.

The secret is to intercalate atoms or Pb islands below the sea of hexagons of carbon that make up graphene. This produces an enormous interaction between two electron characteristics: their spin -- a small 'magnet' linked to their rotation -- and their orbit, the movement they follow around the nucleus.

"This spin-orbit interaction is a million times more intense than that inherent to graphene, which is why we obtain revolutions that could have important uses, for example in data storage," explains Rodolfo Miranda, Director of IMDEA Nanoscience and head of the study.

To obtain this effect, the scientists laid a layer of lead on another of graphene, in turn grown over an iridium crystal. In this configuration the lead forms 'islands' below the graphene and the electrons of this two-dimensional material behave as if in the presence of a colossal 80-tesla magnetic field, which facilitates the selective control of the flow of spins.

Traffic control with two lanes

"And, what is most important, under these conditions certain electronic states are topologically protected; in other words, they are immune to defects, impurities or geometric disturbances," continues Miranda, who gives this example: "If we compare it to traffic, in a traditional spintronic material cars circulate along a single-lane road, which makes collisions more likely, whilst with this new material we have traffic control with two spatially separate lanes, preventing crashes."

Spintronics is a new technology that uses electrons' magnetic spin to store information bits. It arose with the discovery of giant magnetoresistance, a finding which won Peter Grünberg and Albert Fert the Nobel Prize in Physics in 2007. It is an effect that causes great changes in the electric resistance of thin multi-layer materials and has led to the development of components as varied as the read heads of hard disks and the sensors in airbags.

The first generation of spintronic or magnetoresistant devices was based on the effect magnetic materials have on electron spin. But a second generation is already up and running, and encompasses this new study, in which electrons' own spin-orbit interaction acts on them as if there were a real external magnetic field, even if there is not.

The use of graphene as an active component in spintronics is one of the fundamental aims of the large European Union project 'Graphene Flagship'. The scientists' final objective is to deliberately control the spin of the electrons in this new material in order to apply it to the electronic devices of the future.

 

Revealing the quantum geometry of the graphene lattice

Source:

Max Planck Institute of Quantum Optics

Abstract:

Among the most revolutionary concepts of modern physics is that the laws of nature are inherently non-local. One striking manifestation of this non-locality was famously predicted by Aharonov and Bohm: a magnetic field confined to the interior of a solenoid can alter the behavior of electrons outside it, shifting the phase of their wave-like interference although they never directly encounter the magnetic field. Originally regarded as a mere curiosity, such "geometric phase shifts" are now known to have dramatic consequences for electron transport in solid-state materials, e.g., allowing unimpeded current flow along the edges of a material that is insulating in the bulk.

Article:

Honeycomb lattice structure created by three intersecting laser beams (arrows). The atoms are loaded into the minima (blue) of this potential landscape and can quantum-mechanically tunnel between them. One possible position is illustrated by the small sphere.


Among the most revolutionary concepts of modern physics is that the laws of nature are inherently non-local. One striking manifestation of this non-locality was famously predicted by Aharonov and Bohm: a magnetic field confined to the interior of a solenoid can alter the behavior of electrons outside it, shifting the phase of their wave-like interference although they never directly encounter the magnetic field. Originally regarded as a mere curiosity, such "geometric phase shifts" are now known to have dramatic consequences for electron transport in solid-state materials, e.g., allowing unimpeded current flow along the edges of a material that is insulating in the bulk. In suitable crystalline structures, geometric phase shifts can arise even in the absence of any magnetic flux, instead induced by an elusive property known as "Berry flux" in momentum space that is difficult to measure directly.

Now, scientists at the Ludwig-Maximilians-Universität Munich, the Max Planck Institute of Quantum Optics and Stanford University have demonstrated a matter-wave interferometer that precisely measures Berry flux in an artificial crystal formed by a standing wave of light. Their method, reported this week in Science Express, may ultimately enable new approaches to quantum computation exploiting non-local, topological properties of matter for robust encoding of quantum information.

In the classic Aharonov-Bohm experiment, a beam of electrons is sent through an interferometer, wherein trajectories passing either left or right of a solenoid converge on a distant screen and form an interference pattern that shifts in proportion to the magnetic flux in the solenoid. The LMU/MPQ team led by Dr. Ulrich Schneider and Prof. Immanuel Bloch formed an analogous interferometer not with electrons but with neutral atoms, cooled to ultra-low temperature where the wave-like properties of matter are easiest to observe. Lacking electric charge, the atoms experience no geometric phase shift in a true magnetic field and are thus ideally suited to sensing only the surrogate Berry flux. One of the simplest crystalline structures where Berry flux is known to play a role in solid-state physics is the honeycomb lattice of graphene. To form a graphene-like lattice potential for ultracold atoms, the researchers use three intersecting laser beams to generate a pattern of light that confines the atoms to a hexagonal grid formed by the intensity minima.

The most iconic feature in the band structure of graphene is the conical intersection of energy levels known as a Dirac cone, which plays the role of the solenoid in the Berry-flux interferometer. Such conical intersections appear in various contexts ranging from solid-state physics to the energy spectra of molecules and can be shown theoretically to generate a half-quantum of Berry flux -- i.e., a geometric phase shift of π in a trajectory that encircles the Dirac cone. Besides having a precisely known value, this flux is expected to be localized to the single point in momentum space where the energy levels intersect, acting as an infinitely narrow solenoid. The Dirac cone thus constitutes an ideal test case for the accuracy and resolution of a Berry flux detector. The atom interferometer passes this test with flying colors, revealing precisely the expected π phase shift and constraining the location of the responsible flux to a millionth of the lattice's unit cell in momentum space.
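The half-quantum of Berry flux attributed to the Dirac cone can be reproduced in a toy numerical model. The sketch below is an illustrative Python calculation, not the team's analysis code: it builds the standard two-level Dirac Hamiltonian and evaluates the discrete Berry phase of its lower band around two momentum-space loops, one enclosing the Dirac point and one not.

```python
import numpy as np

def lower_eigvec(kx, ky):
    """Lower-band eigenvector of the Dirac Hamiltonian H = kx*sigma_x + ky*sigma_y."""
    H = np.array([[0, kx - 1j * ky],
                  [kx + 1j * ky, 0]])
    vals, vecs = np.linalg.eigh(H)   # eigh sorts eigenvalues ascending
    return vecs[:, 0]                # eigenvector of the lower level

def berry_phase(loop):
    """Discrete Berry phase: phase of the product of overlaps around a closed loop.

    The product of <u_i|u_{i+1}> around the loop is gauge-invariant, so the
    arbitrary phase attached to each eigenvector cancels out.
    """
    vecs = [lower_eigvec(kx, ky) for kx, ky in loop]
    prod = 1.0 + 0j
    for a, b in zip(vecs, vecs[1:] + vecs[:1]):
        prod *= np.vdot(a, b)
    return -np.angle(prod)

ts = np.linspace(0, 2 * np.pi, 200, endpoint=False)

# A circle in momentum space encircling the Dirac point at k = 0
loop_in = [(np.cos(t), np.sin(t)) for t in ts]
print(abs(berry_phase(loop_in)))     # ~ pi: the half-quantum of Berry flux

# A loop that does NOT enclose the Dirac point picks up no Berry flux
loop_out = [(3 + 0.5 * np.cos(t), 0.5 * np.sin(t)) for t in ts]
print(abs(berry_phase(loop_out)))    # ~ 0
```

The abrupt jump from 0 to π as the loop crosses the Dirac point mirrors the topological character of the flux described in the article: only whether the point is enclosed matters, not the loop's shape.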

The interferometric measurement of Berry flux with ultracold atoms is an important milestone in a broad effort to advance control and understanding of the topology of many-particle quantum systems. Driving this effort is the insight that geometric phases can be highly robust to local perturbations that are ordinarily the bane of efforts at storing quantum information. In the case of the Aharonov-Bohm interferometer, this robustness is evidenced by the fact that the measured phase is insensitive to the precise position of the solenoid (or the Dirac cone). Only if the localized flux is moved from inside to outside the interferometer arms -- altering the topology of the system -- does the measured flux suddenly change. Realizing visions of topological quantum computation will require significantly more complex topological features that might be engineered by combining effects of geometric phases with control over interparticle interactions. While many challenges still lie ahead, Berry flux interferometry may be celebrated as a new tool for characterizing -- and perhaps one day harnessing -- topology in ultracold atom systems.

 

Origin of long-standing space mystery revealed: Origin of the 'theta aurora'

 

Source:

University of Southampton

Article:

The night side of the terrestrial magnetosphere forms a structured magnetotail, consisting of a plasma sheet at low latitudes sandwiched between two regions called the magnetotail lobes. The lobes are the regions in which Earth's magnetic field lines connect directly to the magnetic field carried by the solar wind. Different plasma populations are observed in these regions: plasma in the lobes is very cool, whereas the plasma sheet is more energetic. The diagram marks with two red dots the locations of an ESA Cluster satellite and NASA's IMAGE satellite on 15 September 2005, when particular conditions of the magnetic field configuration gave rise to a phenomenon known as the 'theta aurora.'


 

Auroras are the most visible manifestation of the Sun's effect on Earth. They are seen as colourful displays in the night sky, known as the Northern or Southern Lights. They are caused by the solar wind, a stream of plasma -- electrically charged atomic particles -- carrying its own magnetic field, interacting with Earth's magnetic field.

Normally, the main region for this impressive display is the 'auroral oval', which lies at around 65-70 degrees north or south of the equator, encircling the polar caps.

However, auroras can occur at even higher latitudes. One type is known as a 'theta aurora' because seen from above it looks like the Greek letter theta -- an oval with a line crossing through the centre.

While the cause of the auroral oval emissions is reasonably well understood, the origin of the theta aurora was unclear until now.

Researchers observed particles in the two 'lobe' regions of the magnetosphere. The plasma in the lobes is normally cold, but previous observations suggested that theta auroras are linked with unusually hot lobe plasma.

Dr Robert Fear from the University of Southampton (formerly at the University of Leicester, where much of the research took place), and lead author of the paper published in Science this week, says: "Previously it was unclear whether this hot plasma was a result of direct solar wind entry through the lobes of the magnetosphere, or if the plasma is somehow related to the plasma sheet on the night side of Earth.

"One idea is that the process of magnetic reconnection on the night side of Earth causes a build-up of 'trapped' hot plasma in the higher latitude lobes."

The mystery was finally solved by studying data collected simultaneously by the European Space Agency's (ESA) Cluster and NASA's IMAGE satellites on 15 September 2005. While the four Cluster satellites were located in the southern hemisphere magnetic lobe, IMAGE had a wide-field view of the southern hemisphere aurora. As one Cluster satellite observed uncharacteristically energetic plasma in the lobe, IMAGE saw the 'arc' of the theta aurora cross the magnetic footprint of Cluster.

"We found that the energetic plasma signatures occur on high-latitude magnetic field lines that have been 'closed' by the process of magnetic reconnection, which then causes the plasma to become relatively hot," says Dr Fear.

"Because the field lines are closed, the observations are incompatible with direct entry from the solar wind. By testing this and other predictions about the behaviour of the theta aurora, our observations provide strong evidence that the plasma trapping mechanism is responsible for the theta aurora," he adds.

"The study highlights the intriguing process that can occur in the magnetosphere when the interplanetary magnetic field of the solar wind points northwards," adds Philippe Escoubet, ESA's Cluster project scientist.

"This is the first time that the origin of the theta aurora phenomenon has been revealed, and it is thanks to localised measurements from Cluster combined with the wide-field view of IMAGE that we can better understand another aspect of the Sun-Earth connection," he adds.

 

Referencing:

Scientists have solved a long-standing space mystery: the origin of the 'theta aurora', the high-latitude aurora that, seen from above, looks like the Greek letter theta -- an oval with a line crossing through the center.

 

In one aspect of vision, computers catch up to primate brain

 

Source:

Massachusetts Institute of Technology

Article:

 

A team of MIT neuroscientists has found that some computer programs can identify objects in images just as well as the primate brain.

 

For decades, neuroscientists have been trying to design computer networks that can mimic visual skills such as recognizing objects, which the human brain does very accurately and quickly.

Until now, no computer model has been able to match the primate brain at visual object recognition during a brief glance. However, a new study from MIT neuroscientists has found that one of the latest generation of these so-called "deep neural networks" matches the primate brain.

Because these networks are based on neuroscientists' current understanding of how the brain performs object recognition, the success of the latest networks suggests that neuroscientists have a fairly accurate grasp of how object recognition works, says James DiCarlo, a professor of neuroscience and head of MIT's Department of Brain and Cognitive Sciences and the senior author of a paper describing the study in the Dec. 11 issue of the journal PLoS Computational Biology.

"The fact that the models predict the neural responses and the distances of objects in neural population space shows that these models encapsulate our current best understanding as to what is going on in this previously mysterious portion of the brain," says DiCarlo, who is also a member of MIT's McGovern Institute for Brain Research.

This improved understanding of how the primate brain works could lead to better artificial intelligence and, someday, new ways to repair visual dysfunction, adds Charles Cadieu, a postdoc at the McGovern Institute and the paper's lead author.

Other authors are graduate students Ha Hong and Diego Ardila, research scientist Daniel Yamins, former MIT graduate student Nicolas Pinto, former MIT undergraduate Ethan Solomon, and research affiliate Najib Majaj.

Inspired by the brain

Scientists began building neural networks in the 1970s in hopes of mimicking the brain's ability to process visual information, recognize speech, and understand language.

For vision-based neural networks, scientists were inspired by the hierarchical representation of visual information in the brain. As visual input flows from the retina into primary visual cortex and then inferotemporal (IT) cortex, it is processed at each level and becomes more specific until objects can be identified.

To mimic this, neural network designers create several layers of computation in their models. Each level performs a mathematical operation, such as a linear dot product. At each level, the representations of the visual object become more and more complex, and unneeded information, such as an object's location or movement, is cast aside.
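A minimal sketch of such a layered stack, in Python with NumPy, makes the structure concrete. The layer sizes and random weights here are placeholders, not a trained model; each stage is just the dot product plus a simple nonlinearity described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W):
    """One model layer: a linear dot product followed by a ReLU nonlinearity."""
    return np.maximum(0.0, W @ x)

# A toy 3-layer stack; the sizes (64 -> 32 -> 16 -> 8) are arbitrary
# illustrative choices, not taken from any real network.
weights = [rng.standard_normal((32, 64)),
           rng.standard_normal((16, 32)),
           rng.standard_normal((8, 16))]

image = rng.standard_normal(64)    # stand-in for raw pixel input
rep = image
for W in weights:
    rep = layer(rep, W)            # the representation after each transformation

print(rep.shape)                   # a compact 8-number representation
```

In a real network each such element is indeed simple; the complexity Cadieu describes comes from stacking thousands or millions of them.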

"Each individual element is typically a very simple mathematical expression," Cadieu says. "But when you combine thousands and millions of these things together, you get very complicated transformations from the raw signals into representations that are very good for object recognition."

For this study, the researchers first measured the brain's object recognition ability. Led by Hong and Majaj, they implanted arrays of electrodes in the IT cortex as well as in area V4, a part of the visual system that feeds into the IT cortex. This allowed them to see the neural representation -- the population of neurons that respond -- for every object that the animals looked at.

The researchers could then compare this with representations created by the deep neural networks, which consist of a matrix of numbers produced by each computational element in the system. Each image produces a different array of numbers. The accuracy of the model is determined by whether it groups similar objects into similar clusters within the representation.
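The grouping criterion can be sketched with synthetic data: if each image's representation is a vector of numbers, then representations of same-category images should sit closer together than those of different categories. Everything below is simulated for illustration; the study compared real neural recordings with real network outputs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated representations: each image maps to a 16-number vector.
# Five images of category A and five of category B, modeled as noise
# around two hypothetical category centers.
center_a = rng.standard_normal(16)
center_b = rng.standard_normal(16)
reps = np.array([center_a + 0.1 * rng.standard_normal(16) for _ in range(5)] +
                [center_b + 0.1 * rng.standard_normal(16) for _ in range(5)])
labels = np.array([0] * 5 + [1] * 5)

# All pairwise distances between representation vectors
dists = np.linalg.norm(reps[:, None, :] - reps[None, :, :], axis=-1)

same = labels[:, None] == labels[None, :]
np.fill_diagonal(same, False)            # drop zero self-distances
diff = labels[:, None] != labels[None, :]

within, between = dists[same].mean(), dists[diff].mean()
print(within < between)   # True: same-category images form a tighter cluster
```

A model scores well when, as here, its within-category distances are small relative to its between-category distances.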

"Through each of these computational transformations, through each of these layers of networks, certain objects or images get closer together, while others get further apart," Cadieu says.

The best-performing network, developed by researchers at New York University, classified objects as well as the macaque brain did.

More processing power

Two major factors account for the recent success of this type of neural network, Cadieu says. One is a significant leap in the availability of computational processing power. Researchers have been taking advantage of graphical processing units (GPUs), which are small chips designed for high performance in processing the huge amount of visual content needed for video games. "That is allowing people to push the envelope in terms of computation by buying these relatively inexpensive graphics cards," Cadieu says.

The second factor is that researchers now have access to large datasets to feed the algorithms to "train" them. These datasets contain millions of images, and each one is annotated by humans with different levels of identification. For example, a photo of a dog would be labeled as animal, canine, domesticated dog, and the breed of dog.

At first, neural networks are not good at identifying these images, but as they see more and more images and find out when they are wrong, they refine their calculations until they become much more accurate at identifying objects.
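That refine-on-error loop can be illustrated with the simplest possible trainable classifier, a perceptron on toy two-dimensional data. This is a deliberately stripped-down stand-in for the deep networks in the study, showing only the principle of adjusting weights whenever a prediction turns out to be wrong:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy labeled dataset: points in the plane, labeled by which side
# of the line x + y = 0 they fall on.
X = rng.standard_normal((200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

w = np.zeros(2)
b = 0.0
for _ in range(20):                  # several passes over the data
    for xi, yi in zip(X, y):
        pred = int(xi @ w + b > 0)
        if pred != yi:               # found out it was wrong...
            w += (yi - pred) * xi    # ...so refine the calculation
            b += (yi - pred)

accuracy = ((X @ w + b > 0).astype(int) == y).mean()
print(accuracy)                      # high accuracy after the refinement loop
```

Deep networks use a more sophisticated update rule (gradient descent through many layers), but the principle is the same: errors on labeled examples drive the adjustments.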

Cadieu says that researchers don't know much about what exactly allows these networks to distinguish different objects.

"That's a pro and a con," he says. "It's very good in that we don't have to really know what the things are that distinguish those objects. But the big con is that it's very hard to inspect those networks, to look inside and see what they really did. Now that people can see that these things are working well, they'll work more to understand what's happening inside of them."
