NPL worked with the BAE Systems Advanced Technology Centre to measure the pattern and efficiency of radiation emitted from next-generation wearable antennas embedded in T-shirts.
Wearable antennas could be the future of wireless technology and have important applications in communications, security and healthcare, but because they are worn on the body it is particularly important to understand their performance. The human body absorbs electromagnetic signals, so there are concerns that the antenna's emitted signal could suffer power losses if the antenna is worn too close to the body.
The research tested a number of novel measurement techniques that could help the development of this exciting new technology, including measuring the radiation absorbed by a 'human dummy' made from material that mimics the characteristics of human tissue.
Dr James Matthews, Principal Engineer, BAE Systems Advanced Technology Centre, said:
"NPL provided an excellent quality set of measurements, despite the difficulties inherent in wearable antenna technology. NPL took time to understand the requirements and took a proactive approach to the challenge, providing wider ranging measurements than originally anticipated."
NPL facilities used during the research included the Reverberation Chamber, Fully Anechoic Small Antenna Radiated Testing (SMART) Range, the EMC Ferrite Lined Fully Anechoic Room (FAR) and specific absorption rate (SAR) facilities.
The collaborative research revealed that there is an optimum distance for the position of the antenna in relation to the body, which can improve the antenna's efficiency. This information, when integrated into antenna design, will help developers produce better products.
Provided by National Physical Laboratory
"Testing the T-shirt antenna." June 30th, 2011. http://www.physorg.com/news/2011-06-t-shirt-antenna.html
Thursday, June 30, 2011
Tuesday, June 28, 2011
NEC claims spintronics memory breakthrough
By Jack Clark, 13 June 2011, 13:13
NEC and Tohoku University claim to have developed the world's first content-addressable memory system that uses spintronics to store data even when there is no standby power.
Spintronics is a technology for controlling not only individual electrons, but their spin as well, increasing the amount of information that can be stored per electron. The technology promises to increase the efficiency with which devices consume power.
Because data is stored in the spin of an electron, rather than in its presence, information is encoded without moving electrons at the transistor-gate level, which allows less power to be used in the encoding process.
The system that NEC and Tohoku University have developed uses magnets to maintain "the same high operation speed and non-volatile operation as existing circuits when processing and storing data on a circuit while power is off", NEC wrote in a statement on Monday.
The electrons retain their spin orientation when no longer exposed to the magnetic field, so data can be stored with no power running through the system. The magnetic field runs through the domain-wall elements of the logic, so that data is stored within the content-addressable memory (CAM) circuit rather than in a separate memory pool.
By using spintronics in the CAM system, the researchers claim to have halved the circuit area compared with existing technologies and to have cut power consumption.
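A content-addressable memory inverts the usual lookup: instead of taking an address and returning the data stored there, it takes a data word and searches every entry in parallel, returning the matching address(es). Here is a minimal Python sketch of that input/output behavior (illustrative only; a real CAM performs the comparison in dedicated hardware in a single cycle):

```python
# Toy model of a content-addressable memory (CAM) lookup.
# A real CAM compares the search word against every stored entry in
# parallel hardware; this sketch only mimics the observable behavior.

class ToyCAM:
    def __init__(self, size):
        self.entries = [None] * size  # stored data words

    def write(self, address, word):
        self.entries[address] = word

    def search(self, word):
        """Return every address whose stored word matches: RAM in reverse."""
        return [addr for addr, stored in enumerate(self.entries) if stored == word]

cam = ToyCAM(8)
cam.write(3, 0b1010)
cam.write(6, 0b1010)
print(cam.search(0b1010))  # -> [3, 6]
```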
However, NEC and Tohoku University did not disclose the temperature at which the CAM system operated. Room-temperature spintronics is seen as highly desirable, as it increases the potential range of consumer applications. Traditionally, most experiments are done close to absolute zero to reduce the number of factors that can disturb electron spins. For spintronics to be worked into consumer hardware, it will need to be achievable at normal temperatures or the cost will balloon.
Copyright © 1998-2011 CBS Interactive Limited. All rights reserved
New material promises faster electronics
The novel material graphene makes faster electronics possible. Scientists at the Faculty of Electrical Engineering and Information Technology at the Vienna University of Technology (TU Vienna) have developed light detectors made of graphene and analyzed their astonishing properties.
High hopes are pinned on this new material: graphene, a honeycomb-like carbon structure made of only one layer of atoms, exhibits remarkable properties. In 2010, the Nobel Prize was awarded for the discovery of graphene and the study of its behavior. At the Photonics Institute at TU Vienna, the electronic and optical properties of graphene are the focus of interest. Viennese scientists have now demonstrated how remarkably fast graphene converts light pulses into electrical signals. This could considerably improve data exchange between computers.
Converting Light into Electrical Signals
When data is transmitted by light pulses (for instance in fiber optic cables), the pulses have to be converted back into electrical signals that a computer can process. This conversion of light into electrical current is possible thanks to the photoelectric effect, which was originally explained by Albert Einstein. In certain materials, light can cause electrons to leave their positions and travel through the material freely, producing an electrical current. “Light detectors which convert light into electronic signals have been around for a long time. But when they are made of graphene, they react faster than most other materials could”, Alexander Urich explains. He investigated the optical and electronic properties of graphene together with Thomas Müller and Professor Karl Unterrainer at TU Vienna.
Analysis using Ultra Short Laser Pulses
The scientists had already shown last year that graphene can convert light into electronic signals with remarkable speed. However, the reaction time of the material could not be determined – the photoelectric effect in graphene is so fast that it cannot be measured by the usual methods. Now, sophisticated technological tricks have shed some light on the properties of graphene. At TU Vienna, laser pulses were fired at the graphene photodetector in quick succession and the resulting photocurrent was measured. By varying the time delay between the laser pulses, the detector’s maximum frequency can be determined. “Using this method we could show that our detectors can be used up to a frequency of 262 GHz”, Thomas Müller (TU Vienna) says. This corresponds to a theoretical upper bound for data transfer using graphene photodetectors of more than 30 gigabytes per second. It has yet to be determined to what extent this is technically feasible, but the result clearly shows the remarkable capability of graphene and its potential for optoelectronic applications.
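As a sanity check on those figures (our own arithmetic, not the paper's): if an idealized detector resolves one bit per cycle at its maximum frequency, 262 GHz converts to just over 30 gigabytes per second, matching the bound quoted above.

```python
# Back-of-envelope conversion from detector bandwidth to data rate,
# assuming (optimistically) one bit resolved per detector cycle.
f_max_hz = 262e9             # measured maximum detector frequency, 262 GHz
bits_per_second = f_max_hz   # idealized: 1 bit per cycle
gigabytes_per_second = bits_per_second / 8 / 1e9
print(f"{gigabytes_per_second:.1f} GB/s")  # ~32.8 GB/s, i.e. "more than 30 GB/s"
```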
Fast Signals for Fast Electronics
The main reason graphene photodetectors can operate at such high frequencies is the short lifespan of the charge carriers in graphene. The electrons that are removed from their fixed positions and contribute to the electrical current settle down at another fixed position after a few picoseconds (trillionths of a second, 10⁻¹² seconds). As soon as this happens, the graphene photodetector is ready for another light signal, which frees new electrons and creates the next electrical signal.
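The carrier lifetime and the detector bandwidth are two sides of the same coin: a standard first-order estimate treats the detector as a single-pole system with a 3 dB bandwidth of roughly 1/(2πτ). The sketch below uses an assumed lifetime of one picosecond, an illustrative value rather than a number from the paper, and lands in the same range as the measured 262 GHz.

```python
import math

# Single-pole bandwidth estimate from carrier lifetime.
# tau is an assumed illustrative value, not a figure from the TU Vienna paper.
tau = 1e-12                      # carrier lifetime: ~1 picosecond (assumed)
f_3db = 1 / (2 * math.pi * tau)  # first-order 3 dB bandwidth
print(f"{f_3db / 1e9:.0f} GHz")  # ~159 GHz, same order as the measured 262 GHz
```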
The fast reaction time of graphene is one more item on the list of remarkable properties of this material. In graphene, charge carriers can travel extremely far without being disturbed. It can absorb light in a huge spectral range, from infrared to visible light – unlike standard semiconductors, which can only absorb a small part of the spectrum. In addition to this, graphene can conduct heat extremely well and has an exceptionally high breaking strength.
Provided by Vienna University of Technology
"New material promises faster electronics." June 28th, 2011. http://www.physorg.com/news/2011-06-material-faster-electronics.html
Monday, June 27, 2011
Subatomic quantum memory in diamond demonstrated
Physicists working at the University of California, Santa Barbara and the University of Konstanz in Germany have achieved a breakthrough in the use of diamond in quantum physics, marking an important step toward quantum computing. The results are reported in this week's online edition of Nature Physics.
The physicists were able to coax the fragile quantum information contained within a single electron in diamond to move into an adjacent single nitrogen nucleus, and then back again using on-chip wiring.
"This ability is potentially useful to create an atomic-scale memory element in a quantum computer based on diamond, since the subatomic nuclear states are more isolated from destructive interactions with the outside world," said David Awschalom, senior author. Awschalom is director of UCSB's Center for Spintronics & Quantum Computation, professor of physics, electrical and computer engineering, and the Peter J. Clarke director of the California NanoSystems Institute.
Awschalom said the discovery shows the high-fidelity operation of a quantum mechanical gate at the atomic level, enabling the transfer of full quantum information to and from one electron spin and a single nuclear spin at room temperature. The process is scalable, and opens the door to new solid-state quantum device development.
Scientists have recently shown that it is possible to synthesize thousands of these single electron states with beams of nitrogen atoms, intentionally creating defects to trap the single electrons. "What makes this demonstration particularly exciting is that a nitrogen atom is a part of the defect itself, meaning that these sub-atomic memory elements automatically scale with the number of logical bits in the quantum computer," said lead author Greg Fuchs, a postdoctoral fellow at UCSB.
Rather than using logical elements like transistors to manipulate digital states like "0" or "1," a quantum computer needs logical elements capable of manipulating quantum states that may be "0" and "1" at the same time. Even at ambient temperature, these defects in diamond can do exactly that, and have recently become a leading candidate to form a quantum version of a transistor.
However, there are still major challenges to building a diamond-based quantum computer. One of these is finding a method to store quantum information in a scalable way. Unlike a conventional computer, where the memory and the processor are in two different physical locations, in this case they are integrated together, bit-for-bit.
"We knew that the nitrogen nuclear spin would be a good choice for a scalable quantum memory –– it was already there," said Fuchs. "The hard part was to transfer the state quickly, before it is lost to decoherence."
Awschalom explained: "A key breakthrough was to use a unique property of quantum physics –– that two quantum objects can, under special conditions, become mixed to form a new composite object." By mixing the quantum spin state of the electrons in the defect with the spin state of the nitrogen nucleus for a brief time –– less than 100 billionths of a second –– information that was originally encoded in the electrons is passed to the nucleus.
"The result is an extremely fast transfer of the quantum information to the long-lived nuclear spin, which could further enhance our capabilities to correct for errors during a quantum computation," said co-author Guido Burkard, a theoretical physicist at the University of Konstanz, who developed a model to understand the storage process.
Provided by University of California - Santa Barbara
"Subatomic quantum memory in diamond demonstrated." June 27th, 2011. http://www.physorg.com/news/2011-06-subatomic-quantum-memory-diamond.html
Topic of the moment: Particle detection
With CERN’s Large Hadron Collider having reached a data milestone and amid reports of the possible discovery of previously unknown particles by the Tevatron, we take a look at the physics of particle detection.
In June, the total data accumulated by the Large Hadron Collider at CERN reached one “inverse femtobarn” of data, equivalent to 70 trillion particle collisions. Meanwhile, the head of the Tevatron particle accelerator in the US appointed a committee to evaluate the evidence for whether a completely new particle has been discovered.
But how does particle detection work?
All particle detectors, even the early bubble chambers that used ionisation and vapour trails to detect charged particles, work by capturing data from particle collisions. But the volume of data being produced by modern high-energy experiments means that detection methods much more sophisticated than the photography used with the bubble chambers are needed.
In the LHC
The Large Hadron Collider has four main particle detectors: ALICE, ATLAS, CMS and LHCb. These detectors are showered with the particles produced when the two beams of protons circulating around the LHC collide.
Each layer of a detector has a specific function, but the main components are semiconductor detectors, which measure charged particles, and calorimeters, which measure particle energy.
The semiconductor detectors use materials such as silicon to create diodes – components that conduct electrical current in only one direction. Charged particles passing through the large number of strips of silicon placed around the proton beam collision point create a current that can be tracked and measured.
The position where particles originate is also important – if one appears deep within the detector it is likely to have been produced by the decay of another particle produced earlier, possibly within the collision itself.
Calorimeters are designed to be either electromagnetic or hadronic. The former detect particles such as electrons and photons, while the latter pick up protons and neutrons. Particles entering a calorimeter are absorbed, and a particle shower is created by cascades of interactions.
It is the energy deposited into the calorimeter from these interactions that is measured. Stacking calorimeters allows physicists to build a complete picture of the direction in which the particles travelled as well as the energy deposited, or determine the shape of the particle shower produced.
ATLAS is also designed to detect muons, particles much like electrons but 200 times more massive, which pass right through the other detection equipment. A muon spectrometer surrounds the calorimeter, and functions in a similar way to the inner silicon detector.
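Taken together, the layers act as a crude particle classifier: each particle type leaves a characteristic pattern of signals across the tracker, the two calorimeters and the muon system. Below is a hedged toy version of those standard signatures (heavily simplified; real event reconstruction is far more involved):

```python
# Simplified particle signatures in a layered detector such as ATLAS or CMS.
# Key: (tracker hit?, EM-calo shower?, hadronic-calo shower?, muon hit?)
SIGNATURES = {
    (True,  True,  False, False): "electron",
    (False, True,  False, False): "photon (neutral, so no track)",
    (True,  False, True,  False): "charged hadron, e.g. proton",
    (False, False, True,  False): "neutral hadron, e.g. neutron",
    (True,  False, False, True):  "muon (passes through the calorimeters)",
}

def identify(track, em, had, muon):
    return SIGNATURES.get((track, em, had, muon),
                          "ambiguous: needs full reconstruction")

print(identify(True, True, False, False))   # electron
print(identify(True, False, False, True))   # muon
```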
Analysing the data
With so many particle collisions, accelerators such as the LHC generate a huge amount of data, which take a great deal of computing power to capture and analyse. To achieve this, CERN developed the LHC Computing Grid.
The Grid is a tiered network in which data are first processed by the computers at CERN, then sent on to regional sites for further processing, and finally sent to institutions all around the world to be analysed.
The results of experiments can be compared with those predicted by current theories of particle physics to look for any differences. At the LHC, particle physicists are hoping that the collisions will produce a Higgs boson, the as-yet-theoretical particle thought to be responsible for the existence of mass.
At the Tevatron, which collides protons and antiprotons, the evidence that one team believed pointed to an entirely new type of particle appeared as a bump in a graph of experimental data that was not present in the theoretical predictions.
However, this bump was seen only in data from one of the accelerator’s detectors and was not present in the other against which it was compared. It is now believed to have been a phantom signal.
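Underneath, a bump hunt is counting statistics: how improbable is the observed excess given the background the theory predicts? A first-pass estimate divides the excess by the expected Poisson fluctuation, as in this sketch (the event counts are invented for illustration, not Tevatron data):

```python
import math

# Naive local significance of a bump: the excess over the predicted
# background, in units of the Poisson fluctuation sqrt(background).
# Both counts below are made up for illustration.
background = 400.0  # events predicted in the bump window (assumed)
observed = 480.0    # events actually counted (assumed)

z = (observed - background) / math.sqrt(background)
print(f"local significance ~ {z:.1f} sigma")  # 4.0 sigma for these numbers

# Even a large excess in one detector means little if an independent
# detector sees nothing in the same window, which is what happened here.
```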
Physicists will have to keep waiting for the first sight of the Higgs – or other new particles.
text: Chris White
Wednesday, June 22, 2011
Batteries that Recharge in Seconds
A new process could let your laptop and cell phone recharge a hundred times faster than they do now.
By Katherine Bourzac
A new way of making battery electrodes based on nanostructured metal foams has been used to make a lithium-ion battery that can be 90 percent charged in two minutes. If the method can be commercialized, it could lead to laptops that charge in a few minutes or cell phones that charge in 30 seconds.
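To put that in standard battery terms: charging 90 percent of the capacity in two minutes corresponds to roughly a 27C rate, where 1C is the rate that would fill the whole cell in one hour. The arithmetic, restating the article's own figures:

```python
# C-rate implied by charging 90% of capacity in two minutes.
fraction_charged = 0.90
minutes = 2.0
c_rate = fraction_charged / (minutes / 60.0)  # capacity fractions per hour
print(f"~{c_rate:.0f}C")  # ~27C, versus roughly 0.5-1C for a typical pack today
```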
The methods used to make the ultrafast-charging electrodes are compatible with a range of battery chemistries; the researchers have also used them to make nickel-metal-hydride batteries, the kind commonly used in hybrid and electric vehicles.
How fast a battery can charge up and then release that power is primarily limited by the movement of electrons and ions into and out of the cathode, the electrode that is negative during recharging. Researchers have been trying to use nanostructured materials to improve the process, but there's usually a trade-off between total energy storage capacity (which determines how long a battery can run before needing a recharge) and charge rates. "People solved half the problem," says Paul Braun, professor of materials science and engineering at the University of Illinois at Urbana-Champaign.
Braun's group has made highly porous metal foams coated with a large amount of active battery materials. The metal provides high electrical conductivity, and even though it's porous, the structure holds enough active material to store a sufficient amount of energy. The pores allow for ions to move about unimpeded.
The first step in making the cathodes is to create a slurry of polymer spheres on the surface of a conductive substrate. Because of their shape and surface charge, the spheres self-assemble into a regular pattern. The Illinois researchers then use a common technique called electroplating to fill the space between the spheres with nickel. Next, they dissolve the polymer spheres, and most of the metal, to leave a nickel sponge that's about 90 percent open space. Finally, they grow the active material on top of the sponge.
"It's some distance to a product, but we have pretty good lab demos" with nickel-metal-hydride and lithium-ion batteries, says Braun. The Illinois group has made lithium-ion batteries that charge almost entirely in about two minutes. The method should be applicable to the cell sizes needed for laptops and electric cars, though the researchers have not made them yet.
"The performance they got is unprecedented," says Andreas Stein, a professor of chemistry at the University of Minnesota. Stein pioneered the polymer-particle templating method that Braun's group used. Braun's work is described in the journal Nature Nanotechnology.
Jeff Dahn, professor of physics at Dalhousie University, is skeptical that these electrodes will ever end up in products. "When you look at the flow chart for making this structure, it's pretty complicated, and that is going to be expensive," he says.
Braun acknowledges: "There are lots of people coming up with elegant [electrode] structures, but manufacturing them is tricky." He says, however, that his fabrication process combines existing methods that are currently widely used to make other products, if not to make batteries, and that it shouldn't be too difficult to adapt them. The process would add extra steps to making a battery, but these steps aren't particularly expensive or complex, Braun says.
Braun's group will next test the electrode structure with a wider range of battery chemistries and work on improving batteries' other half, the anode—a trickier project.
Copyright Technology Review 2011.
Patent for arrays of nanoscale electrical probes awarded to NJIT today
Reginald C. Farrow and Zafer Iqbal, research professors at NJIT, were awarded a patent today for an improved method of fabricating arrays of nanoscale electrical probes. Their discovery may lead to improved diagnostic tools for measuring the spatial variation of electrical activity inside biological cells.
US Patent 7,964,143 discloses a nanoprobe array technique that allows for an array of individual, vertically-oriented nanotubes to be assembled at precise locations on electrical contacts using electrophoresis. The location of each nanotube in the array is controlled by a nanoscale electrostatic lens fabricated by a process commonly used in the manufacture of integrated circuits.
The research appeared in 2008 in the Journal of Vacuum Science and Technology, in a paper entitled "Directed self-assembly of individual vertically aligned carbon nanotubes." Support for the research was provided by the Department of Defense.
The number of nanotubes deposited at each location is controlled by the geometry of the lens, which makes it possible to deposit a single nanotube in a window much larger than its diameter. After deposition, each individual nanotube can be modified to insulate the shaft and to sensitize it to a specific ion in the cell. The task is accomplished by attaching an appropriate functional molecule or enzyme to the tip of the nanotube.
The completed nanoprobe array may be configured for multiple diverse electrochemical events to be mapped on timescales limited only by the nature of the nanotube's contact with the cell membrane and the speed of integrated circuits.
For three different types of cells (human embryonic kidney cells, mouse neurons, and yeast), the NJIT researchers have measured the electrical response to a signal. This signal is generated by a pair of carbon nanotube probes spaced only six micrometers apart. Yeast cells are too small to measure with the tools most commonly used in the industry for determining electrical response.
The researchers have also demonstrated depositing single wall carbon nanotubes on metal contacts in arrays of vias (windows in an insulator exposing the metal) spaced only 200 nanometers apart. They've also shown the ability to attach electrochemically different functional enzymes to vertically oriented single wall carbon nanotubes at different closely-spaced sites on the same chip.
Both today's patent and a companion patent awarded last year (7,736,979) teach a method for depositing a single nanotube vertically in an electronic circuit using techniques currently used in the fabrication of computer chips. This enables the bridging of electronic technology with biological sensing all the way down to the nanoscale.
Using the process called electrophoresis, the nanotubes in a liquid suspension are drawn to metal contacts at the base of precisely-located vias. Each via gets charged and acts like an electrostatic lens. Once the first nanotube is deposited the electric field is modified and can redirect other nanotubes from depositing on the metal, even though the via may have a diameter several times larger than the diameter of the nanotube element.
This discovery has led to the following patented and patent-pending technologies: a vertical transistor using a single one-nanometer carbon nanotube, a planar biofuel cell, and the nanoprobe array announced today.
Provided by New Jersey Institute of Technology
"Patent for arrays of nanoscale electrical probes awarded to NJIT today." June 21st, 2011. http://www.physorg.com/news/2011-06-patent-arrays-nanoscale-electrical-probes.html
Tuesday, June 21, 2011
Quantum eavesdropper steals quantum keys
(PhysOrg.com) -- In quantum cryptography, scientists use quantum mechanical effects to encrypt and then communicate confidential information. Although quantum cryptography codes are unbreakable in principle, even the best techniques have loopholes in practice that scientists are trying to address. In a recent study, physicists have exposed one of these loopholes by hacking a quantum code, which involved copying a secret quantum key without being detected.
The researchers, Ilja Gerhardt, et al., from the National University of Singapore and the University of Trondheim, have published their study in a recent issue of Nature Communications. Although this is not the first experiment to show that quantum cryptography systems are vulnerable and that a quantum key can be secretly copied, it is the first time that someone has actually copied a quantum key.
“This confirms that non-idealities in physical implementations of QKD [quantum key distribution] can be fully practically exploitable, and must be given increased scrutiny if quantum cryptography is to become highly secure,” the scientists wrote in their study.
In the quantum cryptography technique explored here, the secret key is the tool that the sender and receiver (“Alice” and “Bob”) use to encode messages. For instance, Alice can send a key in the form of polarized single photons to Bob. Alice randomly polarizes the photons using either a horizontal-vertical polarizer or a polarizer with two diagonal axes. Bob also randomly uses one of the two different polarizers to detect each photon. Then, Bob asks Alice over an open channel which polarizer she used for each photon and compares them to his measurements. The measurement results for which Bob used the correct polarizer now become Alice and Bob’s secret key.
In order to copy this key and intercept a message, an eavesdropper (“Eve”) would have to correctly guess which polarizer to use on every photon that Alice sends Bob. Due to the large number of photons used, it’s unlikely that Eve could choose correctly for very long. When Eve uses an incorrect polarizer, the photon’s polarization is randomized, which makes Bob’s measurement incorrect. This error alerts Bob and Alice to an eavesdropper’s presence, which they can confirm by comparing a small subset of the key on an open line.
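The scheme described above is the BB84 protocol, and the eavesdropper detection can be simulated in a few lines. The sketch below (idealized single photons and detectors, and in no way the researchers' code) shows a naive intercept-and-resend attack pushing the sifted-key error rate to about 25 percent:

```python
import random

BASES = ("+", "x")  # rectilinear and diagonal polarizer orientations

def bb84_qber(n_photons, eve_present):
    """Simulate idealized BB84; return the error rate of the sifted key."""
    sifted = errors = 0
    for _ in range(n_photons):
        bit = random.randint(0, 1)
        basis_a = random.choice(BASES)       # Alice's random polarizer
        photon_bit, photon_basis = bit, basis_a

        if eve_present:                      # naive intercept-and-resend
            basis_e = random.choice(BASES)
            eve_bit = photon_bit if basis_e == photon_basis else random.randint(0, 1)
            photon_bit, photon_basis = eve_bit, basis_e  # Eve resends her result

        basis_b = random.choice(BASES)       # Bob's random polarizer
        measured = photon_bit if basis_b == photon_basis else random.randint(0, 1)

        if basis_b == basis_a:               # sifting over the open channel
            sifted += 1
            errors += (measured != bit)
    return errors / sifted

random.seed(1)
print(f"no Eve:   QBER ~ {bb84_qber(20000, False):.1%}")  # ~0%
print(f"with Eve: QBER ~ {bb84_qber(20000, True):.1%}")   # ~25%: Eve is detected
```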
In the new study, the physicists showed how to steal the quantum key without being detected in an experiment on a 290-meter-long fiber link at the National University of Singapore. First, they intercepted single photons traveling along the fiber, and then re-emitted bright light pulses with the same polarization to “blind” the photodiodes that Bob uses to detect photons.
When blinded, Bob’s photodiodes cannot detect single photons; instead they respond to the intensity of incoming light pulses. For this reason, Bob can no longer randomly choose a polarizer for each measurement. In their experiment, the researchers intercepted more than 8 million photons in a five-minute span and re-emitted corresponding bright pulses; Bob registered every one of these bright pulses in the correct detector. So if Alice and Bob were to compare a subset of the key, there would be no errors, and no hint of an eavesdropper on the line.
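Extending the sketch above to a schematic model of the blinding attack (again our illustration of the mechanism, not the actual experiment): because a blinded detector clicks only when Bob's polarizer setting matches the bright pulse Eve sends, every event that survives sifting is one where Eve measured in Alice's basis, so Bob's record matches Alice's perfectly while Eve holds a complete copy.

```python
import random

BASES = ("+", "x")

def bb84_blinding(n_photons):
    """Schematic model of a detector-blinding (faked-state) attack."""
    sifted = errors = eve_correct = 0
    for _ in range(n_photons):
        bit = random.randint(0, 1)
        basis_a = random.choice(BASES)

        # Eve intercepts and measures in a random basis...
        basis_e = random.choice(BASES)
        eve_bit = bit if basis_e == basis_a else random.randint(0, 1)

        # ...then blinds Bob's detectors and sends a bright faked pulse.
        # A blinded detector clicks only when Bob's basis matches Eve's;
        # otherwise the event looks like ordinary channel loss.
        basis_b = random.choice(BASES)
        if basis_b != basis_e:
            continue                  # no click: silently discarded as loss
        measured = eve_bit            # the click reproduces Eve's result

        if basis_b == basis_a:        # sifting proceeds as usual
            sifted += 1
            errors += (measured != bit)
            eve_correct += (eve_bit == bit)
    return errors / sifted, eve_correct / sifted

random.seed(1)
qber, eve_fraction = bb84_blinding(20000)
print(f"QBER: {qber:.0%}, sifted key known to Eve: {eve_fraction:.0%}")  # 0%, 100%
```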
Now that they have shown how to steal a quantum key without detection, the scientists are working on preventing these attacks from happening and making quantum cryptography more secure. One possibility is for Bob to set up a single-photon source in front of his detectors and randomly switch it on just to make sure that his detectors can still register single photons. If not, the detectors may have been “blinded” by an eavesdropper.
More information: Ilja Gerhardt, et al. "Full-field implementation of a perfect eavesdropper on a quantum cryptography system." Nature Communications, Volume: 2, Article number: 349, DOI: 10.1038/ncomms1348
via: Physics World
© 2010 PhysOrg.com
"Quantum eavesdropper steals quantum keys." June 20th, 2011. http://www.physorg.com/news/2011-06-quantum-eavesdropper-keys.html
Monday, June 20, 2011
Japan gadget charges cellphone over campfire
A Japanese company has come up with a new way to charge your mobile phone after a natural disaster or in the great outdoors -- by heating a pot of water over a campfire.
The Hatsuden-Nabe thermo-electric cookpot turns heat from boiling water into electricity that feeds via a USB port into digital devices such as smartphones, music players and global positioning systems.
TES NewEnergy, based in the western city of Osaka, started selling the gadget in Japan this month for 24,150 yen ($299), and plans to market it later in developing countries with patchy power grids.
Chief executive Kazuhiro Fujita said the invention was inspired by Japan's March 11 earthquake and tsunami that left 23,000 people dead or missing, devastated the northeast region and left hundreds of thousands homeless.
"When I saw the TV footage of the quake victims making a fire to keep themselves warm, I came up with the idea of helping them to charge their mobile phones at the same time," Fujita said.
The pot features strips of ceramic thermoelectric material that generate electricity from the temperature differential between the bottom of the pot, at 550 degrees Celsius, and the water boiling inside at 100 degrees.
The company says the device takes three to five hours to charge an iPhone and can heat up your lunch at the same time.
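A rough cross-check of that claim, using an assumed battery size rather than any figure from TES NewEnergy: a 2011-era iPhone battery holds about 5.3 watt-hours, so delivering that in three to five hours implies an electrical output on the order of one to two watts, plausible for a small thermoelectric module.

```python
# Implied electrical output of the pot, assuming (hypothetically) a ~5.3 Wh
# iPhone battery and ignoring charging losses; not a manufacturer figure.
battery_wh = 5.3
for hours in (3, 5):
    print(f"{hours} h charge -> ~{battery_wh / hours:.1f} W output")
# 3 h -> ~1.8 W; 5 h -> ~1.1 W
```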
"Unlike a solar power generator, our pot can be used regardless of time of day and weather while its small size allows people to easily carry it in a bag in case of evacuation," said director and co-developer Ryoji Funahashi.
TES NewEnergy was set up in 2010 to promote products based on technology developed at the National Institute of Advanced Industrial Science and Technology, Japan's largest public research organisation.
It also makes and markets equipment to transform residual heat from industrial waste furnaces into electricity.
The company says the pot will be used mainly in emergency situations and for outdoor activities, but also has uses in developing countries.
"There are many places around the world that lack the electric power supply for charging mobile phones," Fujita said.
"In some African countries, for example, it's a bother for people to walk to places where they can charge mobile phones. We would like to offer our invention to those people."
(c) 2011 AFP
"Japan gadget charges cellphone over campfire." June 20th, 2011. http://www.physorg.com/news/2011-06-japan-gadget-cellphone-campfire.html
Friday, June 17, 2011
Team reports scalable fabrication of self-aligned graphene transistors, circuits
(PhysOrg.com) -- Graphene, a one-atom-thick layer of graphitic carbon, has the potential to make consumer electronic devices faster and smaller. But its unique properties, and the shrinking scale of electronics, also make graphene difficult to fabricate and to produce on a large scale.
In September 2010, a UCLA research team reported that they had overcome some of these difficulties and were able to fabricate graphene transistors with unparalleled speed. These transistors used a nanowire as the self-aligned gate — the element that switches the transistor between various states. But the scalability of this approach remained an open question.
Now the researchers, using equipment from the Nanoelectronics Research Facility and the Center for High Frequency Electronics at UCLA, report that they have developed a scalable approach to fabricating these high-speed graphene transistors.
The team used a dielectrophoresis assembly approach to precisely place nanowire gate arrays on large-area graphene grown by chemical vapor deposition — as opposed to mechanically peeled graphene flakes — to enable the rational fabrication of high-speed transistor arrays. They were able to do this on a glass substrate, minimizing parasitic delay and enabling graphene transistors with extrinsic cut-off frequencies exceeding 50 GHz. Typical high-speed graphene transistors are fabricated on silicon or semi-insulating silicon carbide substrates, which tend to bleed off electric charge, leading to extrinsic cut-off frequencies of around 10 GHz or less.
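The intrinsic/extrinsic gap comes down to parasitics in the standard first-order expression for cutoff frequency, f_T = g_m / (2π(C_gate + C_parasitic)). A hedged illustration follows: the transconductance is the 0.36 mS/μm figure from the paper's abstract, but the device width and capacitances are invented to show the trend, not measured values.

```python
import math

# f_T = g_m / (2 * pi * (C_gate + C_parasitic)): standard first-order
# cutoff-frequency expression. Capacitances are illustrative guesses,
# not device values from the UCLA paper.
g_m = 0.36e-3 * 100   # 0.36 mS/um transconductance x assumed 100 um width, in S
c_gate = 50e-15       # intrinsic gate capacitance (assumed, 50 fF)

for c_par, substrate in ((10e-15, "insulating glass (low parasitics)"),
                         (500e-15, "conductive substrate (high parasitics)")):
    f_t = g_m / (2 * math.pi * (c_gate + c_par))
    print(f"{substrate}: f_T ~ {f_t / 1e9:.0f} GHz")
# Glass lands near 100 GHz; the conductive substrate drops it to ~10 GHz,
# mirroring the >50 GHz vs ~10 GHz contrast described in the article.
```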
Taking an additional step, the UCLA team was able to use these graphene transistors to construct radio-frequency circuits functioning up to 10 GHz, a substantial improvement from previous reports of 20 MHz.
The research opens a rational pathway to the scalable fabrication of high-speed, self-aligned graphene transistors and functional circuits, and it demonstrates for the first time a graphene transistor with a practical (extrinsic) cutoff frequency beyond 50 GHz.
This represents a significant advance toward graphene-based, radio-frequency circuits that could be used in a variety of devices, including radios, computers and mobile phones. The technology might also be used in wireless communication, imaging and radar technologies.
The research was recently published in the peer-reviewed journal Nano Letters.
The UCLA research team included Xiangfeng Duan, professor of chemistry and biochemistry; Yu Huang, assistant professor of materials science and engineering at the Henry Samueli School of Engineering and Applied Science; Lei Liao; Jingwei Bai; Rui Cheng; Hailong Zhou; Lixin Liu; and Yuan Liu. Duan and Huang are also researchers at the California NanoSystems Institute at UCLA.
More information: Scalable Fabrication of Self-Aligned Graphene Transistors and Circuits on Glass, Nano Lett., Article ASAP, DOI: 10.1021/nl201922c
Abstract
Graphene transistors are of considerable interest for radio frequency (rf) applications. High-frequency graphene transistors with the intrinsic cutoff frequency up to 300 GHz have been demonstrated. However, the graphene transistors reported to date only exhibit a limited extrinsic cutoff frequency up to about 10 GHz, and functional graphene circuits demonstrated so far can merely operate in the tens of megahertz regime, far from the potential the graphene transistors could offer. Here we report a scalable approach to fabricate self-aligned graphene transistors with the extrinsic cutoff frequency exceeding 50 GHz and graphene circuits that can operate in the 1–10 GHz regime. The devices are fabricated on a glass substrate through a self-aligned process by using chemical vapor deposition (CVD) grown graphene and a dielectrophoretic assembled nanowire gate array. The self-aligned process allows the achievement of unprecedented performance in CVD graphene transistors with a highest transconductance of 0.36 mS/μm. The use of an insulating substrate minimizes the parasitic capacitance and has therefore enabled graphene transistors with a record-high extrinsic cutoff frequency (> 50 GHz) achieved to date. The excellent extrinsic cutoff frequency readily allows configuring the graphene transistors into frequency doubling or mixing circuits functioning in the 1–10 GHz regime, a significant advancement over previous reports (20 MHz). The studies open a pathway to scalable fabrication of high-speed graphene transistors and functional circuits and represent a significant step forward to graphene based radio frequency devices.
Provided by University of California Los Angeles
"Team reports scalable fabrication of self-aligned graphene transistors, circuits." June 17th, 2011. http://www.physorg.com/news/2011-06-team-scalable-fabrication-self-aligned-graphene.html
Thursday, June 16, 2011
Quark-Gluon Plasma
Recreating the conditions present just after the Big Bang has given experimentalists a glimpse into how the universe formed. Now, scientists have begun to see striking similarities between the properties of the early universe and a theory that aims to unite gravity with quantum mechanics, a long-standing goal for physicists.
“Combining calculations from experiments and theories could help us capture some universal characteristic of nature,” said MIT theoretical physicist Krishna Rajagopal, who discussed these possibilities at the recent Quark Matter conference in Annecy, France.
One millionth of a second after the Big Bang, the universe was a hot, dense sea of freely roaming particles called quarks and gluons. As the universe rapidly cooled, the particles joined together to form protons and neutrons, and the unique state of matter known as quark-gluon plasma disappeared.
In recent years, scientists have reproduced the quark-gluon plasma by smashing together heavy ions – first with gold nuclei at the Relativistic Heavy Ion Collider (RHIC), and then with heavier lead ions at the Large Hadron Collider (LHC) in 2010. The collision energy at the LHC is nearly 14 times higher than at RHIC, producing temperatures more than 100,000 times hotter than the center of the sun – enough to melt the protons and neutrons from the ions down to their quark and gluon components.
The quark-gluon plasma created inside the experiments disappears almost instantaneously – billions of times faster than it did in the early universe – as the quarks and gluons reassemble into composite particles. These particles then fly into the surrounding detectors. Researchers study them to glean information about the quark-gluon plasma.
For example, in 2005 RHIC scientists discovered that the quark-gluon plasma didn’t behave like a gas of particles traveling randomly, but rather as a nearly perfect liquid. All the particles moved with a strong sense of coordination, much like a school of fish seems to move as a single object even though individual fish may be darting around in different directions.
Although researchers have since learned a great deal about the plasma, the collective movement among the quarks and gluons continues to be a stumbling block for explaining the behavior of the exotic fluid. The tools that normally describe particles and interactions on a subatomic scale don’t work, and physicists have set out to find a better measuring stick.
“We have to figure out how to get from understanding single particles to understanding the medium as a whole,” said CERN theoretical physicist Urs Wiedemann. Surprisingly, the answer may come from string theory.
Strung together
String theory is a hypothetical description of nature that accommodates both gravity and the quantum physics describing the other three fundamental forces: electromagnetism and the strong and weak nuclear forces. Traditionally, gravity and quantum physics don't play well together, but string theory uses extra dimensions of space to reconcile the two.
Theorists found that the mathematics of certain quantum theories and that of objects described by gravity in one extra dimension, such as black holes, look remarkably similar. Physicists are taking advantage of this duality by translating problems from the quark-gluon plasma into the language of gravity, where the equations become much simpler. Although the translation can’t provide precise calculations, solving a problem in one realm can give valuable insights into the other.
“On the theory side, black hole physics looks the same as quark-gluon plasma physics,” said physicist David Mateos, an ICREA research professor at the University of Barcelona in Spain, who also presented his work at the conference. “The fact that the same theory can be used to describe physics with gravity and physics without gravity is truly fascinating.”
String theory provides good approximations for certain properties of the quark-gluon plasma, such as its extremely low viscosity – the fact that it can flow without experiencing much resistance – or how certain particles are affected as they travel through the dense liquid.
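The most celebrated of these approximations (a standard result from gauge/gravity duality, quoted here for context rather than drawn from the article) is the Kovtun-Son-Starinets bound on the ratio of shear viscosity η to entropy density s, conjectured to hold for all fluids and saturated exactly by theories with the simplest gravity duals:

    \frac{\eta}{s} \;\ge\; \frac{\hbar}{4\pi k_B}

Measurements at RHIC place the quark-gluon plasma remarkably close to this bound, which is one quantitative sense in which it is a "nearly perfect" liquid.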
“String theory is like a gift to us,” Rajagopal said. “We’re challenged with understanding the quark-gluon plasma as a liquid, and while string theory doesn’t give us precision, it can help us get a feel for the shape of the subject.”
Ultimately, physicists hope to understand how quarks and gluons bind together inside the fluid to form new particles, and what causes them to stay confined. Gravity-based explanations from string theory can also be used to describe phenomena that exhibit strong interactions similar to the quark-gluon plasma, such as superconductivity or supercold atomic gases.
In turn, experimental validations of these approximations could show how string theory best represents nature and point theorists toward avenues to explore further, Wiedemann said. “But for now, string theory is just a useful tool for solving a current theory of reality.”
Lauren Rugani
Wednesday, June 15, 2011
Research improves the bolted joints in airplanes
This is the mechanical joint used in the laboratory to study several physical characteristics. Credit: Carlos III University of Madrid
A research project at Universidad Carlos III de Madrid that analyzes the bolted joints used in the aeronautical industry has determined the optimum force that should be applied so that they may better withstand the variations in temperature that aircraft are subjected to. This advance could improve airplane design, weight and safety.
The idea for this research arose when the problems of the large structural components of an airplane were being analyzed. These components are made up of a large number of different elements, which are themselves assembled using a variety of techniques, such as welding, mechanical fastening, adhesive bonding or a combination of these. Of these techniques, mechanical fastening is the method most commonly used in components made of composite materials. For example, the wing of an Airbus A380 alone is composed of over 30,000 elements, with approximately 750,000 bolted joints. These joints are of key importance since they form a weak point that can contribute to the breakage of the element, as well as increase the weight of the aircraft if the design is inefficient, thus raising the aircraft's operating cost.
The researchers have analyzed the performance of these bolted joints in aeronautical structures in which mechanical elements (screws, nuts, washers) are used to join parts that are made of composite materials. Specifically, the scientists at UC3M have analyzed the influence of bolt torque (the force with which the bolt is tightened) and temperature, which varies from -50°C, when the airplane is flying at an altitude of 10,000 meters, to 90°C, the temperature to which a bolted joint may be exposed when it is close to a heat source. To do this, they developed a numerical model and analyzed the behavior of these joints under different conditions. "The main conclusion that we drew is that the torque of each joint should be estimated taking into account the range of temperatures to which the plate is going to be subjected, because current industry standards that are applied to determine torque do not take this effect into account", explains one of the authors of the study, Professor Enrique Barbero, head of the Advanced Materials Mechanics research group of the Department of Continuum Mechanics and Structural Analysis at UC3M, which has recently published the study in the Journal of Reinforced Plastics and Composites along with Professors María Henar Miguelez and Carlos Santiuste.
The main type of failure that they found was the crushing of carbon fiber plates against the shaft of the bolt, which is made more likely by low temperatures or by low torque levels. "At -50°C the volume of the panels is reduced and the effect of the torque is diminished, so the joints that are subjected to these temperatures, such as those that form part of the fuselage and the external structure of the airplane, should have a greater torque so that its effect is maintained even under very low temperatures", affirms Professor Carlos Santiuste. The opposite effect can also be dangerous, the researchers point out, because when temperatures are high or when the torque is too great, the panels made of composite materials may be damaged by being compressed between the head of the bolt and the washer.
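A minimal numerical sketch of the effect the researchers describe, with assumed material values rather than the UC3M model or data: the bolt and the laminate expand and contract at different rates, so the clamping force set at installation drifts with temperature.

    # Toy model of bolt preload change with temperature.
    # All values below are assumed for illustration; they are not from the study.
    ALPHA_BOLT = 12e-6    # thermal expansion of a steel bolt, 1/K (assumed)
    ALPHA_STACK = 30e-6   # through-thickness expansion of a carbon/epoxy stack, 1/K (assumed)
    STIFFNESS = 2.0e8     # combined joint stiffness, N/m (assumed)
    GRIP = 0.01           # clamped thickness, m (assumed)
    T_INSTALL = 20        # assembly temperature, deg C (assumed)

    def preload_change(t_celsius):
        # Differential thermal strain over the grip length, times joint stiffness.
        d_t = t_celsius - T_INSTALL
        return (ALPHA_STACK - ALPHA_BOLT) * d_t * GRIP * STIFFNESS

    for t in (-50, 20, 90):
        print(f"{t:4d} C: preload change ~ {preload_change(t):+7.0f} N")

With these assumed numbers the joint loses roughly 2.5 kN of clamp at -50°C and gains the same at 90°C, qualitatively reproducing why cold joints need more installation torque and why hot, over-torqued joints risk crushing the laminate.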
This study may have several applications. Optimizing the design of bolted joints in aeronautical structures would allow safety coefficients in the design to be reduced, which would result in a lower structural weight. This reduction would then lead to lower fuel consumption and, therefore, a reduction in the aircraft's operating costs and its impact on the environment. Along these lines, the development of more energy efficient airplanes with a lower environmental impact falls within the "Clean Sky" initiative that is part of the Seventh Framework Programme of the European Union.
This research has been approached with a global view, taking in everything from the design phase through the operation of the structural elements, giving it a markedly interdisciplinary character. To accomplish this, a collaboration was established between two research groups from different departments of UC3M: the Advanced Materials Mechanics group, coordinated by Professor Enrique Barbero, which focuses on the study of lightweight structural elements; and the Mechanical and Biomechanical Component Manufacturing and Design Technology section of the Mechanical Engineering Department, led by Professor Mª Henar Miguelez, which studies machining processes. In this way, the researchers were able to analyze the problems in these joints, which combine a thermal problem and a mechanical one due to the coupled effects of temperature and torque. "This kind of collaboration between research groups is very interesting, as the majority of the most relevant lines of research are interdisciplinary in nature, and it allows us to take on more ambitious lines of research", comments Professor Barbero. As a result of the experience gained through this project, a proposal submitted to the National R + D + i Plan has been accepted and will receive enough funding to continue this line of research for the next three years and to carry out a series of experimental tests.
Provided by Carlos III University of Madrid
"Research improves the bolted joints in airplanes." June 13th, 2011. http://www.physorg.com/news/2011-06-joints-airplanes.html
Tuesday, June 14, 2011
Making quantum cryptography truly secure
Quantum key distribution (QKD) is an advanced tool for secure computer-based interactions, providing confidential communication between two remote parties by enabling them to construct a shared secret key during the course of their conversation.
QKD is perfectly secure in principle, but researchers have long been aware that loopholes may arise when QKD is put into practice. Now, for the first time, a team of researchers at the Centre for Quantum Technologies (CQT) at the National University of Singapore, the Norwegian University of Science and Technology (NTNU) and the University Graduate Center (UNIK) in Norway have created and operated a "perfect eavesdropper" for QKD that exploits just such a loophole in a typical QKD setup. As reported in the most recent issue of Nature Communications, this eavesdropper enabled researchers to obtain an entire shared secret key without alerting either of the legitimate parties that there had been a security breach. The results highlight the importance of identifying imperfections in the implementation of QKD as a first step towards fixing them.
Cryptography has traditionally relied on mathematical conjectures and thus may always be prone to being "cracked" by a clever mathematician who can figure out how to efficiently solve a mathematical puzzle, aided by the continual development of ever-faster computers. Quantum cryptography, however, relies on the laws of physics and should be fundamentally more difficult to crack than traditional approaches. While there has been much discussion of the technological vulnerabilities in quantum cryptography that might jeopardize this promise, there have been no successful full field-implemented hacks of QKD security – until now.
"Quantum key distribution has matured into a true competitor to classical key distribution. This attack highlights where we need to pay attention to ensure the security of this technology," says Christian Kurtsiefer, a professor at the Centre for Quantum Technologies at the National University of Singapore.
In the setup that was tested, researchers at the three institutions demonstrated their eavesdropping attack in realistic conditions over a 290-m fibre link between a transmitter called "Alice" and a receiver called "Bob". Alice transmits light to Bob one photon at a time, and the two build up their secret key by measuring properties of the photons. During multiple QKD sessions over a few hours, the perfect eavesdropper "Eve" obtained the same "secret" key as Bob, while the usual parameters monitored in the QKD exchange were not disturbed – meaning that Eve remained undetected.
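The article does not spell out how the key is built, so as an illustration here is a toy, BB84-style sifting simulation in Python; ordinary random numbers stand in for photon preparation and measurement, and nothing here reflects the specific hardware that was attacked:

    import random

    N = 32
    alice_bits  = [random.randint(0, 1) for _ in range(N)]
    alice_bases = [random.choice("+x") for _ in range(N)]  # preparation basis per photon
    bob_bases   = [random.choice("+x") for _ in range(N)]  # Bob picks a basis at random

    # When the bases happen to match, Bob's measurement reproduces Alice's bit.
    # Mismatched rounds are discarded during public "sifting"; what survives is the shared key.
    sifted_key = [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]

    print("photons sent:", N, "| sifted key bits:", len(sifted_key))

On average half the rounds survive sifting. The eavesdropping attack described below operated underneath this logic, at the level of Bob's detectors.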
The researchers were able to circumvent the quantum principles that in theory provide QKD its strong security by making the photon detectors in Bob behave in a classical way. The detectors were blinded, essentially overriding the system's ability to detect a breach of security. Furthermore, this technological imperfection in QKD security was breached using off-the-shelf components.
"This confirms that non-idealities in the physical implementations of QKD can be fully and practically exploitable, and must be given increased scrutiny if quantum cryptography is to become highly secure," says Vadim Makarov, a postdoctoral researcher at the University Graduate Center in Kjeller, Norway. "We can not simply delegate the burden of keeping a secret to the laws of quantum physics; we need to carefully investigate the specific devices involved," says Kurtsiefer.
The open publication of how the "perfect eavesdropper" was built has already enabled this particular loophole in QKD to be closed. "I am sure there are other problems that might show that a theoretical security analysis is not necessarily exactly the same as a real-world situation," says Ilja Gerhardt, currently a visiting scholar at the University of British Columbia in Vancouver, Canada. "But this is the usual game in cryptography – a secure communications system is created and others try to break into it. In the end this makes the different approaches better."
More information: Ilja Gerhardt, Qin Liu, Antía Lamas-Linares, Johannes Skaar, Christian Kurtsiefer, and Vadim Makarov, "Full-field implementation of a perfect eavesdropper on a quantum cryptography system," Nature Communications 2, 349 (2011). Article will be available at http://www.nature. … mms1348.html
A free preprint is available at http://arxiv.org/abs/1011.0105
Provided by National University of Singapore
"Making quantum cryptography truly secure." June 14th, 2011. http://www.physorg.com/news/2011-06-quantum-cryptography.html
Scientists Create Laser With Mirrors and a Cell
By SINDYA N. BHANOO
Writing in the journal Nature Photonics, Malte Gather and Seok-Hyun Andy Yun report that they created the laser using fluorescent proteins. The glowing substance is produced by marine animals like the jellyfish Aequorea victoria, which uses it to generate a bright green glow.
“We are familiar with lasers like laser pointers and bar-code readers, and so far these lasers have all been of man-made material,” said Dr. Yun.
Dr. Gather and Dr. Yun genetically programmed cells to produce green fluorescent proteins. They then placed such a cell in a cavity with two tiny, parallel mirrors spaced seven ten-thousandths of an inch apart, large enough to hold exactly one cell. Using a microscope, they exposed the cell to pulses of blue light.
When exposed to the pulses, the fluorescent proteins in the cells started to emit green fluorescent light. The light bounced back and forth between the mirrors, amplifying to generate a bright, unidirectional laser light.
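For a feel of the geometry, a back-of-the-envelope calculation (ours; the refractive index and wavelength are assumed, and only the mirror spacing comes from the article): a cavity this short has widely spaced longitudinal modes, so only a handful of wavelengths under the protein's green emission band can resonate.

    C = 3.0e8                  # speed of light, m/s
    L = 0.0007 * 0.0254        # mirror spacing: seven ten-thousandths of an inch, in metres
    N_CELL = 1.37              # assumed refractive index inside the cavity
    WAVELENGTH = 520e-9        # GFP emission near 520 nm (assumed)

    fsr_hz = C / (2 * N_CELL * L)                      # longitudinal mode spacing, frequency
    fsr_nm = WAVELENGTH ** 2 / (2 * N_CELL * L) * 1e9  # same spacing expressed in wavelength

    print(f"mirror spacing: {L * 1e6:.1f} um")
    print(f"mode spacing  : {fsr_hz / 1e12:.1f} THz (~{fsr_nm:.1f} nm near 520 nm)")

With these assumptions the cavity modes sit roughly 5 to 6 nanometres apart, so only a few of them fit under the protein's broad emission band, which helps such a tiny cavity produce clean laser light.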
Lasers, until now built only from inanimate materials, are useful in medicine, telecommunications and data storage. The new finding may one day lead to more efficient and reliable laser technology using biological materials.
The cells used in the study were not damaged in the lasing action and, if damaged, have the potential to regenerate themselves, Dr. Gather said.
“Lasers have a tendency to burn out,” he said. “Normally there is very little you can do apart from replacing the whole device.”
Monday, June 13, 2011
Under pressure, sodium and hydrogen could undergo a metamorphosis, emerging as a superconductor
(PhysOrg.com) -- In the search for superconductors, finding ways to compress hydrogen into a metal has been a point of focus ever since scientists predicted many years ago that electricity would flow, uninhibited, through such a material.
Liquid metallic hydrogen is thought to exist in the high-gravity interiors of Jupiter and Saturn. But so far, on Earth, researchers have been unable to use static compression techniques to squeeze hydrogen under high enough pressures to convert it into a metal. Shock-wave methods have been successful, but as experiments with diamond anvil cells have shown, hydrogen remains an insulator even under pressures equivalent to those found in the Earth's core.
To circumvent the problem, a pair of University at Buffalo chemists has proposed an alternative solution for metallizing hydrogen: Add sodium to hydrogen, they say, and it just might be possible to convert the compound into a superconducting metal under significantly lower pressures.
The research, published June 10 in Physical Review Letters, details the findings of UB Assistant Professor Eva Zurek and UB postdoctoral associate Pio Baettig.
Using an open-source computer program that UB PhD student David Lonie designed, Zurek and Baettig looked for sodium polyhydrides that, under pressure, would be viable superconductor candidates. The program, XtalOpt, is an evolutionary algorithm that incorporates quantum mechanical calculations to determine the most stable geometries or crystal structures of solids.
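XtalOpt couples the evolutionary search to quantum mechanical energy evaluations; the Python skeleton below is a deliberately simplified stand-in (candidate "structures" are plain vectors, the "energy" is a toy function rather than a first-principles calculation, and nothing here uses XtalOpt's actual interfaces) to show the shape of such a search:

    import random

    def toy_energy(structure):
        # Stand-in for an expensive quantum mechanical stability calculation.
        return sum((x - 0.5) ** 2 for x in structure)

    def mutate(s):
        return [min(1.0, max(0.0, x + random.gauss(0, 0.1))) for x in s]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    population = [[random.random() for _ in range(6)] for _ in range(20)]
    for generation in range(50):
        population.sort(key=toy_energy)            # lowest "energy" = most stable first
        parents = population[:10]                  # keep the fittest half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(10)]            # breed and perturb replacements
        population = parents + children

    best = min(population, key=toy_energy)
    print("best toy energy:", round(toy_energy(best), 4))

In the real program each energy evaluation is a quantum mechanical calculation on a trial crystal structure, which is why the evolutionary bookkeeping around it is worth automating.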
In analyzing the results, Baettig and Zurek found that NaH9, which contains one sodium atom for every nine hydrogen atoms, is predicted to become metallic at an experimentally achievable pressure of about 250 gigapascals -- about 2.5 million times the Earth's standard atmospheric pressure, but less than the pressure at the Earth's core (about 3.5 million atmospheres).
"It is very basic research," says Zurek, a theoretical chemist. "But if one could potentially metallize hydrogen using the addition of sodium, it could ultimately help us better understand superconductors and lead to new approaches to designing a room-temperature superconductor."
By permitting electricity to travel freely, without resistance, such a superconductor could dramatically improve the efficiency of power transmission technologies.
Zurek, who joined UB in 2009, conducted research at Cornell University as a postdoctoral associate under Roald Hoffmann, a Nobel Prize-winning theoretical chemist whose research interests include the behavior of matter under high pressure.
In October 2009, Zurek co-authored a paper with Hoffmann and other colleagues in the Proceedings of the National Academy of Sciences predicting that LiH6 -- a compound containing one lithium atom for every six hydrogen atoms -- could form as a stable metal at a pressure of around 1 million atmospheres.
Neither LiH6 nor NaH9 exists naturally as a stable compound on Earth, but under high pressure the structures of both are predicted to be stable.
"One of the things that I always like to emphasize is that chemistry is very different under high pressures," Zurek says. "Our chemical intuition is based upon our experience at one atmosphere. Under pressure, elements that do not usually combine on the Earth's surface may mix, or mix in different proportions. The insulator iodine becomes a metal, and sodium becomes insulating. Our aim is to use the results of computational experiments in order to help develop a chemical intuition under pressure, and to predict new materials with unusual properties."
Provided by University at Buffalo
"Under pressure, sodium and hydrogen could undergo a metamorphosis, emerging as a superconductor." June 13th, 2011. http://www.physorg.com/news/2011-06-pressure-sodium-hydrogen-metamorphosis-emerging.html
A Preview of Future Disk Drives
A new type of data storage technology, called phase-change memory, has proven capable of writing some types of data faster than conventional flash-based storage. The tests used a prototype drive built from phase-change memory chips.
Disks based on solid-state, flash memory chips are increasingly used in computers and servers because they perform faster than conventional magnetic hard drives. The performance of the experimental phase-change drive, created by researchers at the University of California, San Diego, suggests that it won't be long before that technology is able to give computing devices another speed boost.
The prototype created by the researchers is the first to publicly benchmark the performance of phase-change memory chips working in a disk drive. Several semiconductor companies are working on phase-change chips, but they have not released information about storage devices built with them.
"Phase-change chips are not quite ready for prime time, but if the technology continues to develop, this is what [solid state drives] will look like in the next few years," says Steve Swanson, who built the prototype, known as Onyx, with colleagues. It had a data capacity of eight gigabytes and went head-to-head with what Swanson calls a "high-end" 80 GB flash drive made for use in servers.
When it came to writing small chunks of data on the order of kilobytes in size, Onyx was between 70 percent and 120 percent faster than the commercial drive. At the same time, the prototype placed significantly less computational load on the processor of the computer using it. It was also much faster than the flash drive at reading blocks of data of any size. The high-volume, small read and write patterns that Onyx excelled at are a hallmark of the calculations involved in analyzing social networks like Twitter's, says Swanson. However, Onyx was much slower at writing larger chunks of data than its commercially established competitor.
Onyx was built using prototype phase-change chips made by Micron, a company working to commercialize the technology. The chips store data in a type of glass, using small bursts of heat to switch sections of the material between two different states, or phases, that represent digital 1s and 0s. In one phase, the atoms of the glass are arranged in an ordered crystal lattice; in the other, they have an amorphous, disorganized arrangement.
Onyx's performance springs from the much simpler process of writing data to a phase-change chip compared to a flash chip, which stores data as islands of electric charge on chunks of semiconductor, says Swanson. Flash chips cannot rewrite single bits of information—1s or 0s—on demand. Instead they have to erase data in "pages" of a fixed size and then go back to program in the desired data. That limits the technology's speed. "It requires a flash memory device to have software keep a little log as it goes along of which data is correct," says Swanson. "With phase-change memory you can just arbitrarily rewrite what you need."
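A toy model of the contrast Swanson describes, purely illustrative (real flash translation layers are far more elaborate, and the page size is assumed):

    PAGE_SIZE = 4096  # assumed flash page size, bytes

    class ToyFlash:
        def __init__(self, pages=4):
            self.pages = [bytearray(PAGE_SIZE) for _ in range(pages)]

        def write_byte(self, addr, value):
            page, offset = divmod(addr, PAGE_SIZE)
            saved = bytes(self.pages[page])          # read-modify-write: copy the page out,
            self.pages[page] = bytearray(PAGE_SIZE)  # erase the whole page,
            self.pages[page][:] = saved              # program the old contents back,
            self.pages[page][offset] = value         # with the single changed byte

    class ToyPCM:
        def __init__(self, size=4 * PAGE_SIZE):
            self.cells = bytearray(size)

        def write_byte(self, addr, value):
            self.cells[addr] = value                 # rewrite in place, no erase cycle

    flash, pcm = ToyFlash(), ToyPCM()
    flash.write_byte(5, 0xAB)   # drags an entire 4 KB page through erase-and-program
    pcm.write_byte(5, 0xAB)     # touches one cell

The asymmetry is the point: every small write on the flash side costs a whole page cycle, which is exactly the overhead the log-keeping Swanson mentions exists to amortize.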
Sudhanva Gurumurthi, who researches computer architecture at Virginia Tech, says the San Diego project is a valuable demonstration of the true capabilities of phase-change memory chips. "Much research has simulated how they would perform, but this gives insights into complexities a simulation can't capture," he says. But price, he adds, will determine when phase-change memory becomes competitive.
Gurumurthi's research suggests that pairing phase-change memory with flash could bring the technology to market before it is cheap enough to be used in dedicated drives. Simulations showed that adding a small buffer of phase-change memory to a flash-based drive could simplify the process of writing small chunks of data, the kind of operation where flash performs least well. "We found it significantly improves performance," says Gurumurthi. "That might be enough to offset the cost of adding a small amount of phase-change memory."
Sunday, June 12, 2011
Graphene Transistors that Can Work at Blistering Speeds
IBM shows that graphene transistors could one day replace silicon.
By Katherine Bourzac
IBM has created graphene transistors that leave silicon ones in the dust. The prototype devices, made from atom-thick sheets of carbon, operate at 100 gigahertz--meaning they can switch on and off 100 billion times each second, about 10 times as fast as the speediest silicon transistors.
The transistors were created using processes that are compatible with existing semiconductor manufacturing, and experts say they could be scaled up to produce transistors for high-performance imaging, radar, and communications devices within the next few years, and for zippy computer processors in a decade or so.
Researchers have previously made graphene transistors using laborious mechanical methods, for example by flaking off sheets of graphene from graphite; the fastest transistors made this way have reached speeds of up to 26 gigahertz. Until now, transistors made with scalable, wafer-based methods had not equaled those speeds.
Growing transistors on a wafer not only leads to better performance, it's also more commercially feasible, says Phaedon Avouris, leader of the nanoscale science and technology group at IBM's Watson Research Center in Yorktown Heights, NY, where the work was carried out.
Ultimately, graphene has the potential to replace silicon in high-speed computer processors. As computers get faster each year, silicon is getting closer and closer to its physical limits, and graphene provides a promising potential replacement because electrons move through the material much faster than they do through silicon. "Even without optimizing the design, these transistors are already 2.5 times better than silicon," says Yu-Ming Lin, another researcher at IBM Watson who collaborated with Avouris.
Other researchers have made very fast transistors using expensive semiconductor materials such as indium phosphide, but these devices only operate at low temperatures. In theory, graphene has the material properties needed to let transistors run at terahertz speeds at room temperature.
The IBM researchers grew the graphene on the surface of a two-inch silicon-carbide wafer. The process starts when they heat the wafer until the silicon evaporates, leaving behind a thin layer of carbon, known as epitaxial graphene. This technique has been used to make transistors before, but the IBM team improved the process by using better materials for the other parts of the transistor, in particular the insulator.
"Graphene's properties are very sensitive to its environment," says Lin. This is why the IBM group focused on designing a new insulating layer--the part of the transistor that prevents short circuits. They found that adding a thin layer of a polymer between the dielectric and the graphene improved performance. The work is described this week in the journal Science.
Walter de Heer, a professor of physics at Georgia Tech in Atlanta who pioneered methods used to work with epitaxial graphene, says the IBM device is a milestone because of its speed and because it was made using practical fabrication techniques. "This is not pie-in-the-sky stuff, this is real," he says. "This development is really going to turn into a communications device not too long from now."
"One can apply the same processing technologies to get much closer to a product," says Avouris. Last year, the same IBM group, and an independent group at HRL Laboratories in Malibu, CA, both made 10 gigahertz graphene transistors using an involved method called mechanical exfoliation. This process involves peeling away layers from a small piece of graphite until a single, atom-thick sheet remains, then setting that down on a substrate and carving it to form a transistor. The problem with this approach is that it compromises graphene's electrical properties and is not commercially scalable, says Avouris.
The first applications of graphene transistors will likely be as switches and amplifiers in analog military electronics. Indeed, the IBM group's work is supported in part by the Defense Advanced Research Projects Agency. But the researchers say it will be years before the company begins commercial development on carbon electronics.
De Heer notes that the IBM devices don't yet realize graphene's full potential. By carefully controlling the growing conditions, his group has made graphene that conducts electrons 10 times faster than the material used by the IBM team. This higher-quality graphene could, in theory, be used to make transistors that reach terahertz speeds, though de Heer says many things could go wrong during scale-up.
Avouris says the IBM team will work to improve its transistors' speed by miniaturizing them. The ones it has made so far are 240 nanometers long, which is relatively large--silicon electronic components are down to about 20 nanometers. Avouris also believes that their performance could be improved by making the insulating layer thinner. "The next step is to try and integrate these transistors into a truly operational circuit," he says.
Copyright Technology Review 2011.
Speedier Cell Phone Circuitry
IBM has shown graphene could replace other materials in circuits that handle wireless signals.
By Katherine Bourzac
Researchers at IBM have made the fastest integrated circuits yet from graphene, a material that promises much faster components than silicon allows but which has proven difficult to work with.
The team showed that graphene could be used to make faster, more power-efficient versions of circuits that process and generate radio signals in cell phones and other wireless devices. They did so using existing manufacturing techniques, suggesting their designs could be affordable enough to commercialize.
"This is really exciting work and it points to the rapidly approaching future of graphene electronics," says James Tour, professor of chemistry and computer science at Rice University in Houston, Texas, who was not involved with the work.
Graphene, a single-atom-thick mesh of carbon atoms, conducts electrons much faster than silicon. Its electronic properties are such that its greatest promise is not for the digital logic circuits found in microprocessors, but for speedy analog electronics, like those made by the IBM team.
Researchers first demonstrated graphene's electrical promise in 2004, but engineers have since struggled to build graphene circuits using existing manufacturing technology. So far, researchers have made graphene transistors that can operate at speeds of 300 gigahertz, which means they switch on and off 300 billion times a second, thirty times faster than the best silicon transistors.
To make their integrated circuits, the IBM researchers had to combine their graphene transistors with other materials, a challenge for two reasons. First, when graphene transistors are positioned too close to certain metals, the transistors' performance degrades. Second, putting graphene transistors and other elements on a single microchip is tricky. Today, in the journal Science, the IBM researchers report methods for making graphene integrated circuits on single chips using existing fabrication techniques.
The IBM group made a type of circuit called a frequency mixer, combining one graphene transistor and two metal devices called inductors. "The frequency mixer is one of the basic building blocks of analog electronics, and wireless communications in particular," says IBM researcher Yu-Ming Lin. These devices are used in cell phones to convert the radio signal used to transmit information into another signal in a frequency range that the human ear can hear. That's accomplished by mixing the radio signal with a reference signal.
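To make the mixing step concrete, here is a minimal numerical sketch of an idealized mixer. All of the frequencies and the crude filter below are hypothetical choices for illustration, not parameters of the IBM circuit. Mixing is just multiplication, which by the identity cos(a)·cos(b) = 0.5·[cos(a−b) + cos(a+b)] produces a sum frequency and a difference frequency; a low-pass filter then keeps the low, audible one.

# Minimal sketch of an ideal frequency mixer (hypothetical values).
import numpy as np

fs = 1e6      # sample rate: 1 MHz (hypothetical)
f_rf = 100e3  # incoming "radio" signal: 100 kHz (hypothetical)
f_lo = 99e3   # reference ("local oscillator") signal: 99 kHz (hypothetical)

t = np.arange(0, 0.01, 1 / fs)     # 10 milliseconds of samples
rf = np.cos(2 * np.pi * f_rf * t)  # incoming radio signal
lo = np.cos(2 * np.pi * f_lo * t)  # reference signal

# Mixing is multiplication: the product contains the 1 kHz difference
# frequency and the 199 kHz sum frequency.
mixed = rf * lo

# A crude 50-tap moving-average low-pass filter keeps only the 1 kHz tone.
kernel = np.ones(50) / 50
baseband = np.convolve(mixed, kernel, mode="same")

# Find the strongest frequency in the filtered output.
spectrum = np.abs(np.fft.rfft(baseband))
freqs = np.fft.rfftfreq(len(baseband), 1 / fs)
print(f"dominant output frequency: {freqs[np.argmax(spectrum)]:.0f} Hz")

Running this prints a dominant output frequency of 1000 Hz, the 1-kilohertz difference between the two inputs, while the 199-kilohertz sum component is filtered away.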
Previously, combining graphene with other materials has degraded the speed of the resulting electronics. The IBM group prevented this by making sure other materials didn't contact the graphene in a harmful way. They made arrays of graphene transistors on the surface of silicon carbide wafers coated with graphene. They then etched away the extra graphene surrounding the transistors, leaving a clear surface that was easier for metal inductors to stick to. Ensuring separation between the graphene transistors and the metal inductors also prevented degradation of the transistor's electrical properties.
The resulting circuits operate at 10 gigahertz—much faster than previous graphene circuits. Lin concedes that they are less reliable than state-of-the-art silicon frequency mixers, but says the team expects to close that gap soon.
The IBM researchers plan to shrink the transistors from hundreds of nanometers to the scale of tens of nanometers. "They can easily be ten times smaller, which would help us surpass the record," says Lin. "We haven't seen the limits of graphene devices in terms of speed—we think they can get into the terahertz range."
The next step is to improve the reliability of the circuits, says Xiangfeng Duan, professor of chemistry at the University of California, Los Angeles. "The signal comes out weaker at the other end," he notes. "Improving the transistors will help get better circuit performance."
The IBM group is working on this problem, and is developing more complex graphene integrated circuits. Lin says the method used for the frequency-mixer circuits will work for other types of circuits. "This is the first step towards a new level of potential," he says. "Perhaps we won't see the real impact of graphene for another five to ten years."
Copyright Technology Review 2011.