Tuesday, 26 July 2011
Hard-disk technology earns spintronics founders the Nobel Prize in Physics
Hard-disk technology earns spintronics founders the Nobel Prize in Physics: "Just nine years after the discovery, regarded as a typical advance in basic science, the first hard disks with read heads based on the giant magnetoresistance (GMR) effect reached the market."
Molecular electronics and spintronics converge in an organic molecule
Molecular electronics and spintronics converge in an organic molecule: "An effect that earned a Nobel Prize in Physics has been observed for the first time in a single organic molecule, bringing together two of the most promising fields for the future of digital technology."
Monday, 25 July 2011
Physicists tame decoherence and open the way to quantum computing
Physicists tame decoherence and open the way to quantum computing: "Between us and the future of quantum computers lies a phenomenon called decoherence. Physicists have now shown that this dragon can be tamed."
Friday, 22 July 2011
Tevatron experiments close in on favored Higgs mass range
The Higgs particle, if it exists, most likely has a mass between 114 and 137 GeV/c², about 100 times the mass of a proton. This predicted mass range is based on stringent constraints established by earlier measurements, including the highest-precision measurements of the top quark and W boson masses, made by Tevatron experiments. If the Higgs particle does not exist, Fermilab’s Tevatron experiments are on track to rule out this Higgs mass range in 2012.
If the Higgs particle does exist, then the Tevatron experiments may soon begin to find an excess of Higgs-like decay events. With the number of collisions recorded to date, the Tevatron experiments are currently unique in their ability to study the decays of Higgs particles into bottom quarks. This signature is crucial for understanding the nature and behavior of the Higgs particle.
“Both the DZero and CDF experiments have now analyzed about two-thirds of the data that we expect to have at the end of the Tevatron run on September 30,” said Stefan Soldner-Rembold, co-spokesperson of the DZero experiment. “In the coming months, we will continue to improve our analysis methods and continue to analyze our full data sets. The search for the Higgs boson is entering its most exciting, final stage.”
For the first time, the CDF and DZero collaborations have successfully applied well-established techniques used to search for the Higgs boson to observe extremely rare collisions that produce pairs of heavy bosons (WW or WZ) that decay into heavy quarks. This well-known process closely mimics the production of a W boson and a Higgs particle, with the Higgs decaying into a bottom quark and antiquark pair—the main signature that both Tevatron experiments currently use to search for a Higgs particle. This is another milestone in a years-long quest by both experiments to observe signatures that are increasingly rare and similar to the Higgs particle.
“This specific type of decay has never been measured before, and it gives us great confidence that our analysis works as we expect, and that we really are on the doorsteps of the Higgs particle,” said Giovanni Punzi, co-spokesperson for the CDF collaboration.
To obtain their latest Higgs search results, the CDF and DZero analysis groups separately sifted through more than 700,000 billion proton-antiproton collisions that the Tevatron has delivered to each experiment since 2001. After the two groups obtained their independent Higgs search results, they combined their results. Tevatron physicist Eric James will present the joint CDF-DZero search for the Higgs particle on Wednesday, July 27, at the EPS conference.
Provided by Fermilab
Thursday, 21 July 2011
IBM presents advances in phase-change memory (PCM)
IBM presents advances in phase-change memory (PCM): "Phase-change memory stores bits by switching the phase - amorphous or crystalline - of the material it is built from, an alloy of several elements."
Wednesday, 20 July 2011
Glow in the ionosphere signals a tsunami an hour in advance
Glow in the ionosphere signals a tsunami an hour in advance: "Researchers have found that it may be possible to detect a tsunami faster by looking at the air rather than at the water."
Tuesday, 19 July 2011
3D chip integrates processor and memory
3D chip integrates processor and memory: "The goal is to establish the operating parameters of a commercial processor packaged in a 3D chip, above all with respect to heat dissipation."
New battery engineering
It is hard to recall a new automotive technology that has received as much attention from nonautomotive media as vehicle electrification and, more specifically, batteries. Environmental publications debate the impact of electric vehicles (EVs) on planet Earth; business publications debate the viability and profitability of battery suppliers; investment publications analyze start-up EV and battery company IPOs; trade journals for minerals and raw materials debate supplies of lithium and rare earth metals; national news outlets cover a U.S. president’s visit to a battery plant groundbreaking; and Jay Leno holds celebrity EV races.
But for all their glamor and promise, emerging EV technologies face age-old challenges in their quest for introduction into high-volume production—namely, demonstration of safety, quality, and value. OEMs will not and cannot accept even the most promising new technologies until these fundamental attributes are demonstrated.
Large-format lithium-ion batteries are a good “new-tech” example. Consider the complexity: electrochemical reactions + embedded electronics and software + large package geometry and mass + high voltage + thermal management. Traditional engineering disciplines are more important than ever in this domain: failure mode and effects analysis, computer-aided engineering, computational fluid dynamics, and electronics circuit simulation, to name a few.
Battery-cell design alone requires optimization of chemistries for anodes, cathodes, and electrolytes; unique mechanical properties of separators; bonding techniques for electrodes, tabs, and casings; and embedded safety features. Electronic circuits measure and report voltages, currents, and temperatures while also controlling relays and thermal management components (fans, pumps, chillers). Embedded software algorithms calculate state-of-charge and power limits, perform real-time diagnostics, and make decisions on when to cool the pack, balance cells, or stop taking charge.
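To make that last point concrete, the snippet below is a minimal sketch of one such embedded routine: coulomb-counting state-of-charge estimation plus a crude temperature-derated discharge power limit. Every name, threshold, and derating factor here is an assumption chosen for illustration, not a production battery-management algorithm.

def update_soc(soc, current_a, dt_s, capacity_ah):
    """Integrate pack current to track state of charge (0.0 to 1.0).

    current_a > 0 means discharge; dt_s is the sample period in seconds.
    """
    delta_ah = current_a * dt_s / 3600.0          # amp-seconds -> amp-hours
    soc -= delta_ah / capacity_ah
    return min(max(soc, 0.0), 1.0)                # clamp to the physical range


def discharge_power_limit_w(cell_temp_c, pack_voltage_v, max_current_a=200.0):
    """Derate the allowed discharge current outside an assumed comfort band."""
    derate = 1.0 if 0.0 <= cell_temp_c <= 45.0 else 0.5
    return pack_voltage_v * max_current_a * derate


if __name__ == "__main__":
    soc = 0.80                                    # start at 80% charge
    for _ in range(600):                          # 10 minutes of 1 Hz samples at 50 A
        soc = update_soc(soc, current_a=50.0, dt_s=1.0, capacity_ah=40.0)
    print(f"SOC after 10 min at 50 A: {soc:.3f}")
    print(f"Power limit at 25 C, 360 V: {discharge_power_limit_w(25.0, 360.0):.0f} W")

A real pack controller would fuse this with voltage-based corrections and cell balancing, but the basic structure (measure, integrate, clamp, derate) is the same.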
Systems engineering takes a lead role in tying these complex subsystems together. For instance, battery “life” (that is, the period of time over which energy storage capacity is reduced to a predetermined percentage of its beginning-of-life value) is dependent on many factors, such as cumulative time of exposure to high temperatures, number and nature of charge/discharge cycles, and mechanical integrity of the cell retention structures.
The battery systems engineer, therefore, must consider trade-offs in several areas to ensure battery life exceeds 10 years at expected levels of performance while not adding features that increase cost or mass. For example, investing in higher-accuracy voltage and current measurement methods will allow cells to be operated closer to their full Vmin/Vmax range; active heating and cooling can extend cell life and operating range but add mass and control complexity. How much electronics integration should occur at the module level vs. a pack-level centralized circuit topology?
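For a sense of how those trade-offs get explored quantitatively, here is an illustrative and deliberately simplistic capacity-fade model: square-root-of-time calendar fade accelerated by temperature, plus a fixed loss per equivalent full cycle. All coefficients are assumed values picked only to make the example run; they are not LG Chem data.

import math

def remaining_capacity(years, avg_temp_c, cycles_per_year,
                       cal_coeff=0.02, ea_over_r=4000.0, cycle_fade=2e-5):
    """Return the remaining capacity fraction after `years` of service."""
    t_ref_k, t_k = 298.15, avg_temp_c + 273.15
    # Calendar fade: sqrt-of-time law, accelerated at higher temperature.
    accel = math.exp(ea_over_r * (1.0 / t_ref_k - 1.0 / t_k))
    calendar_loss = cal_coeff * accel * math.sqrt(years)
    # Cycle fade: fixed fractional loss per equivalent full cycle.
    cycling_loss = cycle_fade * cycles_per_year * years
    return 1.0 - calendar_loss - cycling_loss

for temp_c in (25.0, 35.0, 45.0):
    frac = remaining_capacity(10.0, temp_c, cycles_per_year=300)
    print(f"{temp_c:4.0f} C average: {frac:.1%} of initial capacity after 10 years")

Run over a few candidate thermal-management strategies, a model like this shows whether the mass and cost of active cooling buys enough life to justify itself.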
The automobile industry—and automobile engineering—has proved remarkably complex and resilient over the past 100 years. More recently, vehicle electrification has attracted government grants, venture capital, Hollywood, and “Green Faith” to the conversation. However, commercially viable volumes will not be realized until safety, quality, and value are firmly established. And those attributes are the essence of good automotive engineering. It’s what we do.
Martin Klein, Engineering Director, LG Chem Power Inc.
Sensitive to electrostatic effects
With the current high price of fuel, the mind-set of most people regarding ownership of large, inefficient vehicles has changed quite a bit. The demand for fuel-saving cars, including hybrids and even fully electric cars, is getting louder.
These types of vehicles need a number of electronic control units (ECUs) for more efficient engine management, fuel-saving operation, battery operation, and the like. Not only will power management require increased electronics in the car; so will safety features (ABS, airbags, distance control, etc.) and “luxury features” such as Internet access.
Additionally, functions that were hydraulic in past automotive designs (such as power steering) will be performed electrically. Vehicles are now evolving into computers on wheels, which can be used to travel from A to B with high speed, high comfort, and, of course, high safety. The electronic parts of this “driving” computer are getting smaller and smaller and more sensitive to electrostatic effects—but their reliability must be maintained.
Protection against electrostatic discharge (ESD) can be realized to a certain extent on the semiconductor itself. This on-chip protection guarantees safe handling at the ECU manufacturer in an ESD-protected area with basic ESD protective measures. Manufacturers follow international standards such as ANSI/ESD S20.20 or IEC 61340-5-1 so that even very sensitive parts can be handled safely.
The ECU will then be implemented in the vehicle, or certain parts of it (such as door modules), at the car manufacturer or its direct suppliers. These assembly lines often have no or very limited ESD handling measures. Therefore, the ECU must possess a much higher robustness against ESD.
There is always a debate as to what the right (voltage) level of protection is and what the right test to guarantee this robustness is. Is it more efficient to make the semiconductor devices themselves more robust or to increase the protection on board? Resistors and capacitors that are needed for performance or electromagnetic compatibility (EMC) are common measures. Extra-protective elements such as transient voltage suppression diodes are, of course, an excellent choice but could affect the performance. They are considered additional elements to be placed on board for assembly, which is a source of failure the manufacturer wants to avoid, and they add extra cost to the ECU and therefore the car.
All of these possibilities have advantages and disadvantages depending on the specific application, cost pressure, and expertise of the board designer. In the past, automotive suppliers and original equipment manufacturers did not discuss this topic in an effective way.
The challenge for the creators of the next generation of cars will be to develop the right ESD protection at the best price, in the most efficient way, and with the highest reliability. Therefore, the dialogue between the car manufacturer, the ECU designer, and the semiconductor manufacturer (who has started the discussion of the ESD topic) must be expanded to avoid ESD problems in the field.
Reinhold Gaertner, EOS/ESD Association Board of Directors
Monday, 18 July 2011
Memristor - missing fourth electronic circuit element
Researchers at HP Labs have built the first working prototypes of an important new electronic component that may lead to instant-on PCs as well as analog computers that process information the way the human brain does.
The new component is called a memristor, or memory resistor. Until now, the circuit element had only been described in a series of mathematical equations written by Leon Chua, who in 1971 was an engineering professor studying non-linear circuits. Chua knew the circuit element should exist — he even accurately outlined its properties and how it would work. Unfortunately, neither he nor the rest of the engineering community could come up with a physical manifestation that matched his mathematical expression.
Thirty-seven years later, a group of scientists from HP Labs has finally built real working memristors, thus adding a fourth basic circuit element to electrical circuit theory, one that will join the three better-known ones: the capacitor, resistor and the inductor.
Researchers believe the discovery will pave the way for instant-on PCs, more energy-efficient computers, and new analog computers that can process and associate information in a manner similar to that of the human brain.
According to R. Stanley Williams, one of four researchers at HP Labs’ Information and Quantum Systems Lab who made the discovery, the most interesting characteristic of a memristor device is that it remembers the amount of charge that flows through it.
Indeed, Chua's original idea was that the resistance of a memristor would depend upon how much charge has gone through the device. In other words, you can flow charge in one direction and the resistance will increase; push the charge in the opposite direction and it will decrease. Put simply, the resistance of the device at any point in time is a function of its history, that is, how much charge went through it either forwards or backwards. That simple idea, now that it has been proven, will have a profound effect on computing and computer science.
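A toy numerical model captures that idea: the device's resistance is a bounded function of the net charge that has flowed through it, so reversing the current walks the resistance back the way it came. The parameter values below are made up for illustration; they are not measurements from the HP devices.

R_ON, R_OFF = 100.0, 16000.0       # ohms at the two extreme states (assumed)
Q_FULL = 1e-4                      # charge (C) needed to sweep between the states

class ToyMemristor:
    def __init__(self):
        self.x = 0.5               # internal state in [0, 1]

    def resistance(self):
        # Linear mix of the two limiting resistances.
        return R_ON * self.x + R_OFF * (1.0 - self.x)

    def apply_current(self, current_a, dt_s):
        """Positive current drives the state toward R_ON, negative toward R_OFF."""
        self.x += current_a * dt_s / Q_FULL
        self.x = min(max(self.x, 0.0), 1.0)

m = ToyMemristor()
print(f"initial resistance:   {m.resistance():.0f} ohm")
m.apply_current(+1e-6, 20.0)       # 1 uA forward for 20 s lowers the resistance
print(f"after forward charge: {m.resistance():.0f} ohm")
m.apply_current(-1e-6, 20.0)       # the same charge backward restores it
print(f"after reverse charge: {m.resistance():.0f} ohm")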
"Part of what’s going to come out of this is something none of us can imagine yet," says Williams. "But what we can imagine in and of itself is actually pretty cool."
For one thing, Williams says these memristors can be used as either digital switches or to build a new breed of analog devices.
For the former, Williams says scientists can now think about fabricating a new type of non-volatile random access memory (RAM) – or memory chips that don’t forget what power state they were in when a computer is shut off.
That’s the big problem with DRAM today, he says. "When you turn the power off on your PC, the DRAM forgets what was there. So the next time you turn the power on you’ve got to sit there and wait while all of this stuff that you need to run your computer is loaded into the DRAM from the hard disk."
With non-volatile RAM, that process would be instantaneous and your PC would be in the same state as when you turned it off.
Scientists also envision building other types of circuits in which the memristor would be used as an analog device.
Indeed, Chua himself noted the similarity between his own predictions of the properties for a memristor and what was then known about synapses in the brain. One of his suggestions was that you could perhaps do some type of neuronal computing using memristors. HP Labs thinks that's actually a very good idea.
"Building an analog computer in which you don’t use 1s and 0s and instead use essentially all shades of gray in between is one of the things we’re already working on," says Williams. These computers could do the types of things that digital computers aren’t very good at –- like making decisions, determining that one thing is larger than another, or even learning.
While a lot of researchers are currently trying to write a computer code that simulates brain function on a standard machine, they have to use huge machines with enormous processing power to simulate only tiny portions of the brain.
Williams and his team say they can now take a different approach: "Instead of writing a computer program to simulate a brain or simulate some brain function, we’re actually looking to build some hardware based upon memristors that emulates brain-like functions," says Williams.
Such hardware could be used to improve things like facial recognition technology, and enable an appliance to essentially learn from experience, he says. In principle, this should also be thousands or millions of times more efficient than running a program on a digital computer.
The HP Labs team's findings will be published in a paper in today's edition of Nature. As for when we might see memristors in actual commercial devices, Williams says the limitations are more business-oriented than technological.
Ultimately, the problem is going to be related to the time and effort involved in designing a memristor circuit, he says. "The money invested in circuit design is actually much larger than building fabs. In fact, you can use any fab to make these things right now, but somebody also has to design the circuits and there’s currently no memristor model. The key is going to be getting the necessary tools out into the community and finding a niche application for memristors. How long this will take is more of a business decision than a technological one."
Image: An atomic force microscope image of a simple circuit with 17 memristors lined up in a row. Each memristor has a bottom wire that contacts one side of the device and a top wire that contacts the opposite side. The devices act as ‘memory resistors’, with the resistance of each device depending on the amount of charge that has moved through each one. The wires in this image are 50 nm wide, or about 150 atoms in total width. Image courtesy of J. J. Yang, HP Labs.
After almost 20 years, math problem falls
Mathematicians and engineers are often concerned with finding the minimum value of a particular mathematical function. That minimum could represent the optimal trade-off between competing criteria — between the surface area, weight and wind resistance of a car’s body design, for instance. In control theory, a minimum might represent a stable state of an electromechanical system, like an airplane in flight or a bipedal robot trying to keep itself balanced. There, the goal of a control algorithm might be to continuously steer the system back toward the minimum.
Friday, 15 July 2011
Thursday, 14 July 2011
Galaxy-sized twist in time pulls violating particles back into line
(PhysOrg.com) -- A University of Warwick physicist has produced a galaxy-sized solution that explains one of the outstanding puzzles of particle physics, while leaving the door open to the related conundrum of why different amounts of matter and antimatter seem to have survived the birth of our Universe.
Wednesday, 13 July 2011
Valleytronics: the new electronics of graphene
Valleytronics: the new electronics of graphene: "Since there is no analogy between the nascent field of valleytronics and silicon-based electronics, it is hard to predict how graphene's valleys might be exploited. But scientists are betting high."
Graphene: even the defects are beautiful and functional
Graphene: even the defects are beautiful and functional: "Besides unbeatable properties when pure, graphene exhibits a few defects that could well be called virtues."
Sapphire chip makes a photon take on two colors at the same time
Sapphire chip makes a photon take on two colors at the same time: "In the quantum world, a photon can be blue and yellow at the same time without becoming green. Or red and orange, likewise at the same time, without becoming orange."
Electric photochromic lens changes color instantly
Electric photochromic lens changes color instantly: "The new transition lens changes color in response to the passage of an electric current - this way, the transition is virtually immediate."
Tuesday, 12 July 2011
Supramolecules get time to shine
(PhysOrg.com) -- What looks like a spongy ball wrapped in strands of yarn -- but a lot smaller -- could be key to unlocking better methods for catalysis, artificial photosynthesis or splitting water into hydrogen, according to Rice University chemists who have created a platform to analyze interactions between carbon nanotubes and a wide range of photoluminescent materials.
Antennas shrink and approach the theoretical limit
Antennas shrink and approach the theoretical limit: "It looks like a contact lens, but it is a flexible, highly miniaturized antenna - in fact, very close to the miniaturization limit for an antenna."
Does the Universe have a central axis of rotation?
Does the Universe have a central axis of rotation?: "Contrary to what current theory proposes, new data suggest that the Universe may have been born spinning, in a cosmic rotational motion."
Monday, 11 July 2011
New way to produce antimatter-containing atom discovered
<p><a href="http://www.physorg.com/news/2011-07-antimatter-containing-atom.html%22%3ENew way to produce antimatter-containing atom discovered</a></p>
Physicists at the University of California, Riverside report that they have discovered a new way to create positronium, an exotic and short-lived atom that could help answer what happened to antimatter in the universe: why nature favored matter over antimatter at the universe's creation.
Sunday, 10 July 2011
Molecular self-assembly: chips that build themselves are on the way
Molecular self-assembly: chips that build themselves are on the way: "Polymer molecules latch onto tiny pillars and organize themselves completely autonomously, forming all seven of the basic patterns considered essential for fabricating electronic circuits."
Where electronics, spintronics, and quantum computing meet
Where electronics, spintronics, and quantum computing meet: "By interacting with individual magnetic molecules, scientists have reached a triple frontier where memory, logic, and quantum logic can be integrated."
Saturday, 9 July 2011
New 'cooler' technology offers fundamental breakthrough in heat transfer for microelectronics
In this diagram of the Sandia Cooler, heat is transferred to the rotating cooling fins. Rotation of the cooling fins eliminates the thermal bottleneck typically associated with a conventional CPU cooler. (Diagram courtesy of Jeff Koplow)
(PhysOrg.com) -- Sandia National Laboratories has developed a new technology with the potential to dramatically alter the air-cooling landscape in computing and microelectronics, and lab officials are now seeking partners in the electronics chip cooling field to license and commercialize the device.
MIT spinout unveils new more powerful direct-diode laser
(PhysOrg.com) -- TeraDiode, an MIT spinout located nearby in Littleton, MA, has unveiled a powerful new direct-diode laser capable of cutting all the way through steel up to half an inch thick at various speeds. The laser is based on technology developed by company co-founders Dr. Bien Chann and Dr. Robin Huang while still at MIT.
CERN launches open hardware initiative
CERN launches open hardware initiative: "Open Hardware aims to do for equipment what the concept of Free Software does for computer programs."
Friday, 8 July 2011
Free to download: July’s Physics World
By Matin Durrani

It is perhaps a little-known fact that Griffin – the main character in H G Wells’ classic novel The Invisible Man – was a physicist. In the 1897 book, Griffin explains how he quit medicine for physics and developed a technique that made himself invisible by reducing his body’s refractive index to match that of air.
While Wells’ novel is obviously a work of fiction, the quest for invisibility has made real progress in recent years – and is the inspiration for this month’s special issue of Physics World, which you can download for free via this link.
Kicking off the issue is Sidney Perkowitz, who takes us on a whistle-stop tour of invisibility through the ages – from its appearance in Greek mythology to camouflaging tanks on the battlefield – before bringing us up to date with recent scientific developments.
Ulf Leonhardt then takes a light-hearted look at the top possible applications of invisibility science. Hold on to your hats for invisibility cloaks, perfect lenses and the ultimate anti-wrinkle cream.
Some of these applications might be years away, but primitive invisibility cloaks have already been built, with two independent groups of researchers having recently created cloaks operating with visible light that can conceal centimetre-scale objects, including a paper clip, a steel wedge and a piece of paper. But as Wenshan Cai and Vladimir Shalaev explain, these cloaks only work under certain conditions, namely with polarized light, in a 2D configuration and with the cloak immersed in a high-refractive-index liquid. It seems that the holy grail of hiding macroscopic objects viewed from any angle using unpolarized visible light is still some way off.
The special issue ends with a look at something even more fantastic-sounding – the possibility of creating a cloak that works not just in space but in space–time. Although no such “event cloak” has yet been built, Martin McCall and Paul Kinsler outline the principles of how it would work and describe what might be possible with a macroscopic, fully functioning device that conceals events from view. These applications range from the far-fetched, such as the illusion of a Star Trek-style transporter, to the more mundane, such as controlling signals in an optical routing system.
But, hey, that’s enough of me banging on about the special issue. Download it for free now and find out for yourself. And don’t forget to let us know what you think by e-mailing us at pwld@iop.org or via our Facebook or Twitter pages.
P.S. If you’re a member of the Institute of Physics, you can in addition read the issue in digital form via this link, where you can also listen to, search, share, save, print, archive and even translate individual articles. How’s that for value?

Thursday, 7 July 2011
Superconducting chips could become reality
Superconducting chips could become reality: "Superconducting germanium has been discovered, a surprising feat because this element was not considered a candidate to replace silicon, whose physical limits of miniaturization are fast approaching."
Was the Space Shuttle a Mistake?
The program's benefits weren't worth the cost—and now the U.S. is in jeopardy of repeating the same mistake, says a leading space policy expert.
By John M. Logsdon
Forty years ago, I wrote an article for Technology Review titled "Shall We Build the Space Shuttle?" Now, with the 135th and final flight of the shuttle at hand, and the benefit of hindsight, it seems appropriate to ask a slightly different question—"Should We Have Built the Space Shuttle?"
After the very expensive Apollo effort, a low-cost space transportation system for both humans and cargo was seen as key to the future of the U.S. space program in the 1980s and beyond. So developing some form of new space launch system made sense as the major NASA effort for the 1970s, presuming the United States was committed to continuing space leadership. But it was probably a mistake to develop this particular space shuttle design, and then to build the future U.S. space program around it.
The selection in 1972 of an ambitious and technologically challenging shuttle design resulted in the most complex machine ever built. Rather than lowering the costs of access to space and making it routine, the space shuttle turned out to be an experimental vehicle with multiple inherent risks, requiring extreme care and high costs to operate safely. Other, simpler designs were considered in 1971 in the run-up to President Nixon's final decision; in retrospect, taking a more evolutionary approach by developing one of them instead would probably have been a better choice.
The shuttle does, of course, leave behind a record of significant achievements. It is a remarkably capable vehicle. It has carried a variety of satellites and spacecraft to low-Earth orbit. It serviced satellites in orbit, most notably during the five missions to the Hubble Space Telescope. On a few flights, the shuttle carried in its payload bay a small pressurized laboratory, called Spacelab, which provided research facilities for a variety of experiments. That laboratory was a European contribution to the space shuttle program. With Spacelab and the Canadian-provided robotic arm used to grab and maneuver payloads, the shuttle set the precedent for intimate international cooperation in human spaceflight. The shuttle kept American and allied astronauts flying in space and opened up the spaceflight experience to scientists and engineers, not just test pilots. The space shuttle was a source of considerable pride for the United States; images of a shuttle launch are iconic elements of American accomplishment and technological leadership.
But were these considerable benefits worth the $209.1 billion (in 2010 dollars) that the program cost? I doubt it. The shuttle was much more expensive than anyone anticipated at its inception. Then-NASA administrator James Fletcher told Congress in 1972 that the shuttle would cost $5.15 billion to develop and could be operated at a cost of $10.5 million per flight. NASA only slightly overran development costs, which is normal for a challenging technological effort, but the cost of operating the shuttle turned out to be at least 20 times higher than was projected at the program's start. The original assumption was that the lifetime of the shuttle would be between 10 and 15 years. By operating the system for 30 years, with its high costs and high risk, rather than replacing it with a less expensive, less risky second-generation system, NASA compounded the original mistake of developing the most ambitious version of the vehicle. The shuttle's cost has been an obstacle to NASA starting other major projects.
But replacing the shuttle turned out to be difficult because of its intimate link to the construction of the space station. President Reagan approved the development of a space station in 1984, but the final design of what became the International Space Station (ISS) was not chosen until 1993. The first shuttle-launched element of the ISS was not orbited until 1998. It took 13 years to complete the ISS. Without the shuttle, construction of the ISS would have been impossible, leaving the U.S. with little choice but to keep the shuttle flying to finish the job. This necessity added almost two decades and billions of dollars of cost to the shuttle's operation. Whether the shuttle is ultimately viewed as successful will in large part be tied to the payoffs from the space station that it made possible. It will be years before those payoffs can be measured.
I have previously written that it was a policy mistake to choose the space shuttle as the centerpiece of the nation's post-Apollo space effort without agreeing on its goals (Science, May 30, 1986).
Today we are in danger of repeating that mistake, given Congressional and industry pressure to move rapidly to the development of a heavy lift launch vehicle without a clear sense of how that vehicle will be used. Important factors in the decision to move forward with the shuttle were the desire to preserve Apollo-era NASA and contractor jobs, and the political impact of program approval on the 1972 presidential election. Similar pressures are influential today. If we learn anything from the space shuttle experience, it should be that making choices with multidecade consequences on such short-term considerations is poor public policy.
John M. Logsdon is professor emeritus at the Space Policy Institute, George Washington University, and author of John F. Kennedy and the Race to the Moon. In 2003, he was a member of the Columbia Accident Investigation Board.
Copyright Technology Review 2011.
Tuesday, 5 July 2011
Sulphur Breakthrough Significantly Boosts Lithium Battery Capacity
Trapping sulphur particles in graphene cages produces a cathode material that could finally make lithium batteries capable of powering electric cars
By kfc

But good as they are, lithium batteries are not up to the demanding task of powering the next generation of electric vehicles. They just don't have enough juice or the ability to release it quickly over and over again.
The problem lies with the cathodes in these batteries. The specific capacities of the anode materials in lithium batteries are 370 mAh/g for graphite and 4200 mAh/g for silicon. By contrast, the cathode specific capacities are 170 mAh/g for LiFePO4 and only 150 mAh/g for layered oxides.
So the way forward is clear: find a way to improve the cathode's specific capacity while maintaining all the other characteristics that batteries require, such as a decent energy efficiency and a good cycle life.
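A back-of-the-envelope calculation shows why the cathode is the bottleneck. For a balanced cell, the capacity per gram of combined electrode material is 1 / (1/C_anode + 1/C_cathode), so even a silicon anode helps little while the cathode stays near 170 mAh/g. The sketch below ignores electrolyte, separator, current collectors and packaging, so the numbers are only a relative comparison, not cell-level capacities.

def combined_capacity(c_anode_mah_g, c_cathode_mah_g):
    """Capacity per gram of anode + cathode active material, balanced cell."""
    return 1.0 / (1.0 / c_anode_mah_g + 1.0 / c_cathode_mah_g)

pairings = [
    ("graphite / LiFePO4",               370.0,  170.0),
    ("graphite / layered oxide",         370.0,  150.0),
    ("silicon  / LiFePO4",              4200.0,  170.0),
    ("silicon  / sulphur (theoretical)", 4200.0, 1672.0),
]

for name, c_anode, c_cathode in pairings:
    print(f"{name:34s} {combined_capacity(c_anode, c_cathode):6.0f} mAh/g of electrodes")

Swapping graphite for silicon barely moves the combined figure while the cathode stays put; pairing silicon with a sulphur cathode, as described next, is what changes it by an order of magnitude.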
Today, Hailiang Wang and buddies at Stanford University say they've achieved a significant step towards this goal using sulphur as the cathode material of choice.
Chemists have known for many years that sulphur has potential: it has a theoretical specific capacity of 1672 mAh/g. But it also has a number of disadvantages, not least of which is that sulphur is a poor conductor. On top of this, polysulphides tend to dissolve and wash away in many electrolytes, while sulphur tends to swell during the discharge cycle, causing it to crumble.
But Wang and co say they've largely overcome these problems using a few clever nanoengineering techniques to improve the performance. Their trick is to create submicron sulphur particles and coat them in a kind of plastic called polyethyleneglycol or PEG. This traps polysulphides and prevents them from washing away.
Next, Wang and co wrap the coated sulphur particles in a graphene cage. The interaction between carbon and sulphur renders the particles electrically conducting and also supports the particles as they swell and shrink during each charging cycle.
The result is a cathode that retains a specific capacity of more than 600 mAh/g over 100 charging cycles.
That's impressive. Such a cathode would immediately lead to rechargeable lithium batteries with a much higher energy density than is possible today. "It is worth noting that the graphene-sulfur composite could be coupled with silicon based anode materials for rechargeable batteries with significantly higher energy density than currently possible," say Wang and co.
But there is more work ahead. Even though the material maintains a high specific capacity over 100 cycles, Wang and co say the capacity drops by 15 per cent in the process.
So they will be hoping, and indeed expecting, to improve on this as they further optimise the material.
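To see why that 15 per cent matters, here is a rough extrapolation assuming the fade stayed geometric at the same relative rate per cycle, which real degradation need not do; it is scale-setting arithmetic, not a prediction for this material.

per_cycle_retention = 0.85 ** (1.0 / 100.0)    # about 0.99838 per cycle
start_mah_g = 600.0

print(f"implied per-cycle retention: {per_cycle_retention:.5f}")
for cycles in (100, 300, 500, 1000):
    remaining = start_mah_g * per_cycle_retention ** cycles
    print(f"after {cycles:4d} cycles: {remaining:5.0f} mAh/g "
          f"({remaining / start_mah_g:.0%} of initial)")

At automotive cycle counts (roughly a thousand full cycles over a vehicle's life), that rate would leave only about a fifth of the initial capacity, which is why the further optimisation matters.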
The next step then is to create a working battery out of this stuff. Wang and co say they plan to couple it to a pre-lithiated silicon based anode to achieve this.
If it all works out (and that's a significant 'if'), your next car could be powered by Li-S batteries.
Ref: arxiv.org/abs/1107.0109: Graphene-Wrapped Sulfur Particles as a Rechargeable Lithium-Sulfur-Battery Cathode Material with High Capacity and Cycling Stability
Saturday, 2 July 2011
A Smarter, Stealthier Botnet
The "most technologically sophisticated" malware uses clever communications tricks and encryption to avoid disruption.
By David Talbot
A new kind of botnet—a network of malware-infected PCs—behaves less like an army and more like a decentralized terrorist network, experts say. It can survive decapitation strikes, evade conventional defenses, and even wipe out competing criminal networks.
The botnet's resilience is due to a super-sophisticated piece of malicious software known as TDL-4, which in the first three months of 2011 infected more than 4.5 million computers around the world, about a third of them in the United States.
The emergence of TDL-4 shows that the business of installing malicious code on PCs is thriving. Such code is used to conduct spam campaigns and various forms of theft and fraud, such as siphoning off passwords and other sensitive data. It's also been used in the billion-dollar epidemic of fake anti-virus scams.
"Ultimately TDL-4 is simply a tool for maintaining and protecting a compromised platform for fraud," says Eric Howes, malware analyst for GFI Software, a security company. "It's part of the black service economy for malware, which has matured considerably over the past five years and which really needs a lot more light shed on it."
Unlike other botnets, the TDL-4 network doesn't rely on a few central "command-and-control" servers to pass along instructions and updates to all the infected computers. Instead, computers infected with TDL-4 pass along instructions to one another using public peer-to-peer networks. This makes it a "decentralized, server-less botnet," wrote Sergey Golovanov, a malware researcher at the Moscow-based security company Kaspersky Lab, on this blog describing the new threat.
"The owners of TDL are essentially trying to create an 'indestructible' botnet that is protected against attacks, competitors, and antivirus companies," Golovanov wrote. He added that it "is one of the most technologically sophisticated, and most complex-to-analyze malware."
The TDL-4 botnet also breaks new ground by using an encryption algorithm that hides its communications from traffic-analysis tools. This is an apparent response to efforts by researchers to discover infected machines and disable botnets by monitoring their communication patterns, rather than simply identifying the presence of the malicious code.
Demonstrating that there is no honor among malicious software writers, TDL-4 scans for and deletes 20 of the most common forms of competing malware, so it can keep infected machines all to itself. "It's interesting to mention that the features are generally oriented toward achieving perfect stealth, resilience, and getting rid of 'competitor' malware," says Costin Raiu, another malware researcher at Kaspersky.
Distributed by criminal freelancers called affiliates, who get paid between $20 and $200 for every 1,000 infected machines, TDL-4 lurks on porn sites and some video and file-storage services, among other places, where it can be automatically installed using vulnerabilities in a victim's browser or operating system.
Once TDL-4 infects a computer, it downloads and installs as many as 30 pieces of other malicious software—including spam-sending bots and password-stealing programs. "There are other malware-writing groups out there, but the gang behind [this one] is specifically targeted on delivering high-tech malware for profit," says Raiu.
Copyright Technology Review 2011.
Making Speedy Memory Chips Reliable
By Katherine Bourzac
IBM researchers have developed a programming trick that makes it possible to more reliably store large amounts of data using a promising new technology called phase-change memory. The company hopes to start integrating this storage technology into commercial products, such as servers that process data for the cloud, in about five years.
Like flash memory, commonly found in cell phones, phase-change memory is nonvolatile. That means it doesn't require any power to store the data. And it can be accessed rapidly for fast boot-ups in computers and more efficient operation in general. Phase-change memory has a speed advantage over flash, and Micron and Samsung are about to bring out products that will compete with flash in some mobile applications.
These initial products will use memory cells that store one bit each. But for phase-change memory to be cost-competitive for broader applications, it will need to achieve higher density, storing multiple bits per cell.
Greater density is necessary for IBM to achieve its goal of developing phase-change memory for high-performance systems such as servers that process and store Internet data much faster.
The IBM work announced today offers a solution. In the past, researchers haven't been able to make a device that uses multiple bits per cell that works reliably over months and years. That's because of the properties of the phase-change materials used to store the data. Scientists at IBM Research in Zurich have developed a software trick that allows them to compensate for this.
Each cell in these data-storage arrays is made up of a small spot of phase-change materials sandwiched between two electrodes. By applying a voltage across the electrodes, the material can be switched to any number of states along a continuum from totally unstructured to highly crystalline. The memory is read out by using another electrical pulse to measure the resistance of the material, which is much lower in the crystalline state.
To make multibit memory cells, the IBM group picked four different levels of electrical resistance. The trouble is that over time, the electrons in the phase-change cells tend to drift around, and the resistance changes, corrupting the data. The IBM group has shown that they can encode the data in such a way that when it's read out, they can correct for drift-based errors and get the right data.
The IBM group has shown that error-correcting code can be used to reliably read out data from a 200,000-cell phase-change memory array after a period of six months. "That's not gigabits, like flash, but it's impressive," says Eric Pop, professor of electrical engineering and computer sciences at the University of Illinois at Urbana-Champaign. "They're using a clever encoding scheme that seems to prolong the life and reliability of phase-change memory."
For commercial products, that reliability timescale needs to come up to 10 years, says Victor Zhirnov, director of special projects at the Semiconductor Research Corporation. IBM says it can get there. "Electrical drift in these materials is mostly problematic in the first microseconds and minutes after programming," says Harris Pozidis, manager of memory and probe technologies at IBM Research in Zurich. The problem of drift can be statistically accounted for in the IBM coding scheme over whatever timeframe is necessary, says Pozidis, because it occurs at a known rate.
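The drift correction can be pictured with a toy example. Amorphous-phase resistance drift is commonly modeled as a power law, R(t) = R0 * (t/t0)**nu with a small, roughly known exponent, so a reader that knows the elapsed time can divide the drift back out before deciding which of the four levels a cell holds. The level spacing and exponent below are assumptions for illustration; this is not IBM's coding scheme, which also layers error correction on top.

LEVELS_OHM = [1e4, 3e4, 9e4, 2.7e5]   # four assumed target resistances -> 2 bits/cell
NU = 0.05                             # assumed drift exponent
T0 = 1.0                              # reference time after programming (s)

def drifted(r0_ohm, t_s):
    """Resistance observed at time t for a cell programmed to r0_ohm."""
    return r0_ohm * (t_s / T0) ** NU

def nearest_level(r_ohm):
    return min(range(len(LEVELS_OHM)), key=lambda i: abs(LEVELS_OHM[i] - r_ohm))

def read_level(r_observed_ohm, t_s):
    """Divide out the known drift, then pick the nearest programmed level."""
    return nearest_level(r_observed_ohm / (t_s / T0) ** NU)

six_months_s = 0.5 * 365.0 * 24.0 * 3600.0
for level, r0 in enumerate(LEVELS_OHM):
    r_now = drifted(r0, six_months_s)
    print(f"level {level}: reads {r_now:9.2e} ohm -> "
          f"naive decode {nearest_level(r_now)}, drift-corrected {read_level(r_now, six_months_s)}")

In a real array the levels sit closer together and noise is added on top, which is where the statistical modeling and coding that Pozidis describes come in.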
But phase-change memory won't be broadly adopted until its power consumption is brought under control, says Zhirnov. It still takes far too much energy to flip the bits in these arrays. That is largely due to the way the electrodes are designed, and many researchers are working on the problem. This spring, Pop's group at the University of Illinois demonstrated storage arrays that use carbon nanotubes to switch phase-change memory cells with 100 times less power.
Copyright Technology Review 2011.
sexta-feira, 1 de julho de 2011
F1 gearing up for new 'green' and 'cool' future
July 1st, 2011 in Technology / Energy & Green Tech
Photo caption: Paul di Resta of Great Britain and Force India drives during the Canadian Formula One Grand Prix at the Circuit Gilles Villeneuve on June 12 in Montreal, Canada.
Formula One is preparing itself for a period of progressive change towards greater fuel efficiency, clearer 'green' credentials and much bigger popularity with car makers and racing fans.
This vision of a future in which 'green equals cool', steering the sport's brand towards a more eco-friendly set of values suited to a new age of low carbon emissions, was spelt out by team chiefs this week.
Mercedes' boss Ross Brawn and McLaren's Martin Whitmarsh were members of a panel that met F1 supporters at a 'meet the fans' forum organised by the Formula One Teams Association (FOTA) at the McLaren headquarters on Thursday.
Referring to the introduction of new 1.6-litre V6 turbo-charged engines in 2014, Brawn said: "It's not only about the fact that the new engine is going to be more efficient in itself.
"It's the message it gives -- that it's cool to have a really efficient engine and race on a lot less fuel."
For decades, Formula One has been associated with huge levels of power, high levels of noise and fears of equally high levels of pollution, mostly stemming from a perceived need to give the sport's fans a deafening experience of glorious, ear-splitting engine performance.
Now, according to the new generation of team chiefs leading the sport forward, those days are drawing to a close. The new engines, and their associated hybrid systems, were approved by the sport's governing body, the Fédération Internationale de l'Automobile (FIA), on Wednesday.
The 'new age of F1' will usher in a fuel-efficiency improvement of at least 35 per cent, energy recovery systems, fuel restrictions, a rev limit of 15,000 rpm (down from 18,000) and an overall power ceiling of around 750 bhp.
"We're setting dramatic targets for reducing the amount of fuel we race with - 30 per cent, 40 per cent and even 50 per cent less than what we're racing on now, but still with the same power and the same excitement," explained Brawn.
He went on to explain that the new F1 vision was designed to chime with the efforts of modern car makers in a world of climate change, diminishing oil supplies and rising fuel prices.
But this progressive thinking was not adopted easily by the sport and faced severe opposition from many stakeholders, including commercial ringmaster Bernie Ecclestone, who had argued that the sport needed the noise generated by engines revving to 18,000 rpm.
Brawn added: "You're not going to get manufacturers coming in with the normally aspirated V8 we have now. The new engine creates opportunity for manufacturers to come in -- and that's a vital reason why we need a new engine with a more relevant specification for the manufacturers."
Whitmarsh was asked about the sport's future with free-to-air television and said that all the teams believed in it.
"All the FOTA teams believe in free-to-air," he said. "There will be parts of the market where there will be some differentiated service offered, but F1 teams are creating brand exposure and all the names we have on our cars require us to have a large audience.
"Our current contract requires it remains on free-to-air and the teams are going to safeguard our interests and those of the fans in this regard, but it isn't as simple as we must stay free-to-air and we must stay away from pay per view.
"We have to embrace (the understanding) that media is really multi-faceted. We have to make sure there is a mass free entry to be able to see Grands Prix, but there's an awful lot of people who want a lot more information than you're going to get on free-to-air."
The future of F1's broadcasting arrangements has been under scrutiny in recent weeks following interest from Rupert Murdoch's SKY organisation and reports suggesting that, in Britain, the BBC was not prepared to continue beyond its current contract.
(c) 2011 AFP
"F1 gearing up for new 'green' and 'cool' future." July 1st, 2011. http://www.physorg.com/news/2011-07-f1-gearing-green-cool-future.html