Friday, December 30, 2011

What if electric cars were better?

Improving the energy density of batteries is the key to mass-market electric vehicles.

Electric vehicles are still too expensive and have too many limitations to compete with regular cars, except in a few niche markets. Will that ever change? The answer has everything to do with battery technology. Batteries carrying more charge for a lower price could extend the range of electric cars from today's 70 miles to hundreds of miles, effectively challenging the internal-combustion engine.


To get there, many experts agree, a major shift in battery technology may be needed. Electric vehicles such as the all-electric Nissan Leaf and the Chevrolet Volt, a plug-in hybrid from GM, rely on larger versions of the lithium-ion batteries that power smart phones, iPads, and ultrathin laptops. Such gadgets are possible only because lithium-ion batteries have twice the energy density of the nickel–metal hydride batteries used in the brick-size mobile phones and other bulky consumer electronics of the 1980s.

Using lithium-ion batteries, companies like Nissan, which has sold 20,000 Leafs globally (the car is priced at $33,000 in the U.S.), are predicting that they've already hit upon the right mix of vehicle range and sticker price to satisfy many commuters who drive limited distances.

The problem, however, is that despite several decades of optimization, lithium-ion batteries are still expensive and limited in performance, and they will probably not get much better. Assembled battery packs for a vehicle like the Volt cost roughly $10,000 and deliver about 40 miles before an internal-combustion engine kicks in to extend the charge. The battery for the Leaf costs about $15,000 (according to estimates from the Department of Energy) and delivers about 70 miles of driving, depending on various conditions. According to an analysis by the National Academy of Sciences, plug-in hybrid electric vehicles with a 40-mile electric range are "unlikely" to be cost competitive with conventional cars before 2040, assuming gasoline prices of $4 per gallon.

Estimates of the cost of assembled lithium-ion battery packs vary widely (see "Will Electric Vehicles Finally Succeed?"). The NAS report put the cost at about $625 to $850 per kilowatt-hour of energy; a Volt-like car requires a battery capacity of 16 kilowatt-hours. But the bottom line is that batteries need to get far cheaper and provide far greater range if electric vehicles are ever to become truly popular.
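To make that arithmetic concrete, here is a small illustrative calculation in Python using the per-kilowatt-hour figures quoted in this article; the comparison itself is mine, added only for illustration.

# Illustrative pack-cost arithmetic from the $/kWh figures quoted above.
def pack_cost(capacity_kwh, cost_per_kwh):
    return capacity_kwh * cost_per_kwh

VOLT_PACK_KWH = 16.0  # Volt-class pack size, per the article
# NAS range ($625-$850), NAS 2030 projection ($360), DOE 2020 goal ($125):
for cost_per_kwh in (625, 850, 360, 125):
    total = pack_cost(VOLT_PACK_KWH, cost_per_kwh)
    print(f"${cost_per_kwh}/kWh -> ${total:,.0f} for a 16 kWh pack")
# Output: $10,000 and $13,600 at the NAS estimates (consistent with the roughly
# $10,000 Volt pack cited earlier), $5,760 at the 2030 projection, and $2,000
# at the DOE goal.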

Whether that's possible with conventional lithium-ion technology is a matter of debate. Though some involved in battery manufacturing say the technology still has room for improvement, the NAS report, for one, notes that although lithium-ion batteries have been getting far cheaper over the last decade, those reductions seem to be leveling off. It concludes that even under optimistic assumptions, lithium-ion batteries are likely to cost around $360 per kilowatt-hour in 2030.

The U.S. Department of Energy, however, has far more ambitious goals for electric-vehicle batteries, aiming to bring the cost down to $125 per kilowatt-hour by 2020. For that, radical new technologies will probably be necessary. As part of its effort to encourage battery innovation, the DOE's ARPA-E program has funded 10 projects, most of them involving startup companies, to find "game-changing technologies" that will deliver an electric car with a range of 300 to 500 miles.

The department has put $57 million toward efforts to develop a number of very different technologies, including metal-air, lithium-sulfur, and solid-state batteries. Among the funding recipients is Pellion Technologies, a Cambridge, Massachusetts-based startup working on magnesium-ion batteries that could provide twice the energy density of lithium-ion ones; another ARPA-E-funded startup, Sion Power in Tucson, Arizona, promises a lithium-sulfur battery that has an energy density three times that of conventional lithium-ion batteries and could power electric vehicles for more than 300 miles.

The ARPA-E program is meant to support high-risk projects, so it's hard to know whether any of the new battery technologies will succeed. But if the DOE meets its ambitious goals, it will truly change the economics of electric cars. Improving the energy density of batteries has already changed how we communicate. Someday it could change how we commute.
Copyright Technology Review 2011.

Saturday, December 24, 2011

Saturday, December 10, 2011

Dark Flow may be evidence of another universe

Dark Flow may be evidence of another universe: Scientists believe they have found evidence of the existence of another universe. And, to form a trinity with Dark Matter and Dark Energy, which together account for more than 95% of our universe, astronomers have named this new evidence Dark Flow.

Laws of physics vary across the Universe

Laws of physics vary across the Universe: One of science's most cherished principles - the constancy of the laws of physics - may not be true, a conclusion with the potential to completely change the way we understand the Universe.

Tuesday, November 29, 2011

Sunday, November 20, 2011

Sunday, September 11, 2011

Thursday, September 8, 2011

‘Breakthrough’ Technique for Yield-Strength Testing

From American Machinist

Nanovea, a developer of mechanical testing and coordinated measuring systems, introduced a patent-pending method for measuring yield strength through indentation. It calls the Nanovea Mechanical Tester a “breakthrough” that will ultimately replace traditional tensile testing machines.

The yield strength (or "yield point") of a material is the stress at which plastic deformation begins. Below the yield point, a material deforms elastically and returns to its original shape when the stress is removed. Yield strength is a critical piece of data for understanding the properties of advanced materials in industries such as biomedical devices, microelectronics, and energy.
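For readers unfamiliar with how a yield point is normally extracted from tensile data, here is a minimal Python sketch of the conventional 0.2% offset method. The stress-strain numbers are invented for illustration, and this is the standard tensile definition, not Nanovea's indentation procedure.

import numpy as np

# Invented stress-strain data (strain is dimensionless, stress in MPa).
strain = np.array([0.000, 0.001, 0.002, 0.003, 0.004, 0.006, 0.010, 0.020])
stress = np.array([0.0,   200.0, 400.0, 560.0, 640.0, 700.0, 740.0, 760.0])

E = stress[1] / strain[1]   # elastic modulus from the initial slope
offset = 0.002              # the conventional 0.2% strain offset

# Yield strength = stress where the curve meets the offset elastic line
# sigma = E * (strain - offset). Find the first crossing and interpolate.
offset_line = E * (strain - offset)
diff = stress - offset_line
idx = np.argmax(diff <= 0)  # index of the first point at or after the crossing
f = diff[idx - 1] / (diff[idx - 1] - diff[idx])
yield_strength = stress[idx - 1] + f * (stress[idx] - stress[idx - 1])
print(f"0.2% offset yield strength ~ {yield_strength:.0f} MPa")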

Typically, yield strength is determined with a tensile testing machine, a large instrument that must exert high forces to pull apart metal, plastic, or other materials. In addition to the capital investment in the machine, the process consumes a lot of material and involves considerable effort, including sample preparation. Even then, it may be impossible to test yield strength on small samples or in localized areas.

Yield strength data is easy to determine using Nanovea's Mechanical Tester in indentation mode with a cylindrical flat tip. Indentation testing has long been used for hardness and elastic-modulus measurements, but it has been difficult to link macro tensile properties to what is measured during an indentation test. Many studies using spherical tips have produced stress-strain curves, but these have never been able to give reliable tensile yield-strength data that corresponds directly to macro tensile results.

The Nanovea method, using a cylindrical flat tip, returns a yield strength directly comparable to what is measured by traditional means. The assumption is that the load per surface area at which the cylindrical flat tip penetrates at increased speed is directly linked to the load per surface area at which the material starts flowing in a tensile test. As a result, reliable yield-strength results can now be gathered on numerous materials for which such data was never before obtainable.

"This is just another addition, on a long and growing list, to what can be tested with our mechanical tester," according to Nanovea CEO Pierre Leroux. While this specific test is a breakthrough of great importance, it is also evidence that the Nanovea Mechanical Tester has the widest testing capability of any mechanical testing system.

Monday, September 5, 2011

Smallest electric molecular motor has just one molecule

Smallest electric molecular motor has just one molecule: The development is part of a new class of devices that could be used in applications ranging from medicine to engineering.

Special report highlights 'greatest hits' of scientific supercomputing

Source: http://www.physorg.com/news/2011-09-special-highlights-greatest-scientific-supercomputing.html

In 2007, a report that concluded that the Earth was warming, probably as a result of human activities, resulted in a share of the Nobel Peace Prize. The United Nations Intergovernmental Panel on Climate Change's next assessment, expected in 2014, once again includes simulation data generated from DOE leadership supercomputers, this time at Oak Ridge and Argonne national laboratories. Next, a team of researchers, led by Warren Washington of the National Center for Atmospheric Research, will use a 2011 allocation of 110 million processor hours at Argonne and Oak Ridge to begin the generation of the largest treasure trove of climate data to date.

Saturday, August 20, 2011

Glass memory stores information in 5 dimensions

Glass memory stores information in 5 dimensions: Researchers have created a nanostructured glass that can work as an optical memory and also reduce the cost of high-resolution microscopy and medical imaging.

Friday, August 19, 2011

Light propagates as if space had vanished

Site Inovação Tecnológica newsroom - August 9, 2011

Space has vanished
An international group of scientists has built an optical nanostructure that allows light to pass through it without any phase change.

It is as if the medium through which the light is traveling did not exist in space.

If it is already hard for human intuition to imagine eternity - non-time, which is very different from a very long time - it is practically impossible to abstract away from space entirely and imagine something that is nowhere at all.

That is more or less what the researchers have done, creating a "non-space" in which light travels without any distortion - and do not imagine it is something as simple as a vacuum, which is always spatially bounded.

And the device could have practical applications: it could be useful in optoelectronics, for example, as a way of transporting optical signals while avoiding any distortion.

Photonic metamaterial
Whenever light travels, whatever medium it is passing through, it undergoes a phase change as its individual oscillations become shifted relative to one another.
In some optical applications - in interferometers, for example - these phase variations cause a dispersion of frequencies, leading to phase distortions that ultimately reduce signal quality.

Now the researchers have found a way to control the dispersion of light using a metamaterial, the same kind of artificial structure used to make invisibility cloaks.
The device includes photonic crystals with a negative index of refraction, something that does not occur in nature but has become quite common with the advent of metamaterials.
These artificial photonic crystals are alternated with ordinary crystals with a positive refractive index. Each layer is about two micrometers thick.

[Image: Diagram of the behavior of light inside the metamaterial, contained within a Mach-Zehnder interferometer. Image: UCL]

The light that crosses the material through the photonic crystal flows in the direction opposite to the energy flow.
The result is that the light's phase is still oscillating when it leaves the device, but with a net phase change of zero - which is to say, with no phase change at all, as if the light had not passed through any medium.
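As a rough illustration (a simplified layered-optics estimate, not the authors' full photonic-crystal calculation): the phase accumulated by a wave of wavelength lambda crossing layers of thickness d_i and refractive index n_i is approximately

\[
\Delta\phi = \frac{2\pi}{\lambda}\sum_i n_i\, d_i ,
\]

so if the negative-index layers are chosen such that n_1 d_1 + n_2 d_2 + ... sums to zero, the net phase change vanishes even though the light has crossed a finite physical thickness - which is the zero phase delay described above.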

Zero phase delay
"What we have seen is that the light disperses through the material as if the space inside were not there," comments Dr. Serdar Kocaman, of Columbia University. "The oscillatory phase of the electromagnetic wave does not even advance as it would in a vacuum - that is what we call zero phase delay."

The device was fabricated using silicon wafers, like those used to make processors and other chips, which means it could be integrated into optoelectronic circuits.

"We can now control the flow of light, the fastest thing we know," comments Chee Wei Wong, principal author of the discovery. "This could allow [the creation of] self-focusing light beams, highly directional antennas, and even another approach to cloaking or hiding objects."

Bibliography:
Zero phase delay in negative-refractive-index photonic crystal superlattices
S. Kocaman, M. S. Aras, P. Hsieh, J. F. McMillan, C. G. Biris, N. C. Panoiu, M. B. Yu, D. L. Kwong, A. Stein, C. W. Wong
Nature Photonics
Vol.: 5, Pages: 499-505 (2011)
DOI: 10.1038/nphoton.2011.129

Source: Site Inovação Tecnológica - www.inovacaotecnologica.com.br URL: http://www.inovacaotecnologica.com.br/noticias/noticia.php?artigo=luz-propaga-espaco-tivesse-sumido

The Evolution of Motorola’s Wireless Technology - Technology Review

Monday, August 15, 2011

American Society for Quality - Electronics & Communications Division

Hi everyone,

I would like to share the Electronics & Communications Division of the American Society for Quality.

This Division deals with topics related to Reliability, Availability, and Maintainability in the pursuit of excellence through quality practices. The division is made up of 5 technical committees, namely: Nanotechnology, Electronics, Communications, Restricted Substances (elimination of lead from solder), and Sarbanes-Oxley.

The division has a large knowledge repository in the form of papers on these subjects. The site is worth a visit.
Let me know if you are interested.
There are also webinars, and I can point you to them. They cost nothing and can be attended even within our daily work routine.
www.asq.org/ec/index.html

New study proves that much-sought exotic quantum state of matter can exist

Source: http://www.physorg.com/news/2011-08-much-sought-exotic-quantum-state.html

(PhysOrg.com) -- The world economy is becoming ever more reliant on high-tech electronics such as computers featuring fingernail-sized microprocessors crammed with billions of transistors. For progress to continue - for Moore's Law, according to which the number of computer components crammed onto microchips doubles every two years even as the size and cost of components halves, to continue - new materials and new phenomena need to be discovered.

Sunday, August 14, 2011

A Battery You Can See Through

Transparent batteries could lead to transparent designs for cell phones and other gadgets.

Researchers at Stanford University have made fully transparent batteries, the last missing component needed to make transparent displays and other electronic devices.

Stanford materials science professor Yi Cui, who led the work, says a tremendous amount of research goes into making batteries store more energy for longer, but little attention has been paid to making them "more beautiful, and fancier."

Researchers have previously made transparent variations on other major classes of electronics, including transistors and the components used to control displays, but not yet batteries. "And if you can't make the battery transparent, you can't make the gadget transparent," says Cui.

Some battery components are easier to make using transparent materials than others. The electrodes are the tricky part, says Cui. One way to make a transparent electrode is to make it very thin, on the order of about 100 nanometers thick. But a thin electrode typically can't store enough energy to be useful.

Another approach is to make the electrode in the form of a pattern with features smaller than the naked eye can resolve. As long as there is enough total electrode material in the battery, this type of electrode can still store a significant amount of energy. Cui designed a mesh electrode in which the lines of the mesh are on the order of 50 micrometers wide, effectively invisible, and the squares inside the mesh contain no battery materials.
Fabrication is also tricky, since the usual methods for making components at this resolution require harsh chemical processes that damage battery materials. The Stanford group instead used a relatively simple method to make the transparent mesh electrodes, which are held together inside a clear, squishy polymer called PDMS.
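To get a rough feel for the transparency trade-off of a mesh electrode like the one described above, here is a simple geometric estimate in Python. The line width matches the figure quoted in the article, but the pitch values are hypothetical, and the calculation ignores the optical properties of the materials themselves.

# Rough geometric transparency of a square mesh electrode: the fraction of
# light passing straight through is about the open-area fraction (1 - w/p)^2,
# where w is the line width and p is the mesh pitch.
def open_area_fraction(line_width_um, pitch_um):
    return (1.0 - line_width_um / pitch_um) ** 2

LINE_WIDTH_UM = 50.0  # per the article; lines this narrow are hard to see
for pitch_um in (200.0, 500.0, 1000.0):  # hypothetical pitches
    frac = open_area_fraction(LINE_WIDTH_UM, pitch_um)
    print(f"pitch {pitch_um:6.0f} um -> ~{frac:.0%} open area")
# A wider pitch lets more light through but leaves less electrode material
# per unit area, which is the energy-versus-transparency trade-off Cui notes.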

They start by using lithography to make a mold on a silicon wafer. Then they pour liquid PDMS over the mold, cure the polymer to solidify it, and peel it off the mold. The PDMS sheet is then engraved with a grid of narrow channels. Next they drip a solution of electrode materials onto the surface of the PDMS. Capillary action pulls the materials in until they have filled all the channels to create the mesh. The researchers used standard lithium-ion battery materials to make their electrodes.

To make the complete battery, they sandwich a clear gel electrolyte between the two electrodes, and put it all inside a protective plastic wrapping. The Stanford researchers created prototypes, and used them to power an LED whose light can be seen through the battery itself.

Cui says these batteries should, in theory, be able to store about half as much energy as an equivalent-sized opaque battery, because there is a trade-off between energy density and transparency. They can lay down a thicker mesh of electrode materials to store more energy, but that means less light will get through.
So far, his lab's prototypes can store 20 watt-hours per liter, about as much energy as a nickel-cadmium battery, but Cui expects to improve this by an order of magnitude, in part by reducing the thickness of the polymer substrate, and by making the trenches that hold the electrode materials deeper.

Another way to store more energy without sacrificing transparency would be to stack multiple cells on top of one another in such a way that the grid of the electrodes lined up, allowing light to pass through. So far, the group has made electrodes that are about an inch across, but Cui says they could be made much larger, and the material could simply be cut to the desired size.
Copyright Technology Review 2011.

Thursday, August 4, 2011

New study shows how to eliminate motion sickness on tilting trains

Source: http://www.physorg.com/news/2011-08-motion-sickness-tilting.html

An international team of researchers led by scientists at Mount Sinai School of Medicine has found that motion sickness on tilting trains can be essentially eliminated by adjusting the timing of when the cars tilt as they enter and leave the curves. They found that when the cars tilt just at the beginning of the curves, instead of while they are making the turns, there was no motion sickness. The findings were published online Monday, July 25 in the Federation of American Societies for Experimental Biology (FASEB) Journal.

Wednesday, August 3, 2011

100-year-old model of the atom to be celebrated

A series of public lectures taking place next week will look at the legacy of Rutherford’s discovery and give citizens of Manchester the chance to join nuclear physicists from around the world in celebrating his 100-year-old model of the atom.

The lectures will explain how fundamental physics has moved on from Rutherford’s discovery to the huge and elaborate experiments taking place in the Large Hadron Collider (LHC); how medical physics is underpinned by our improved understanding of the atom; and, finally, the power generated by the splitting of the atom, and nuclear power’s safety record. 

The public lectures accompany the Institute of Physics' (IOP) academic conference, 'The Rutherford Centennial Conference on Nuclear Physics'; it was 100 years ago, in 1911, as chair of physics at the University of Manchester, that Ernest Rutherford – now deemed the father of nuclear physics – devised the now familiar model of the atom.

When Rutherford postulated that atoms are composed of a very small, positively charged nucleus orbited by electrons, he kick-started 100 years of nuclear physics.
From Monday 8 August to Wednesday 10 August, starting at 19.30 each evening, there will be three public evening lectures at the University of Manchester, Oxford Road, exploring the profound legacy of this incredible scientist.

On Monday 8 August, Dr David Jenkins, an experimental nuclear physicist from the University of York, will explain why physicists are still studying the atomic nucleus 100 years after Rutherford’s discovery and show how the science has evolved from the small scale work of Rutherford to the excitement of the LHC.
On Tuesday 9 August, Professor Alan Perkins, President of the British Nuclear Medicine Society, will look at how our understanding of the atom and its radioactive by-products have been used in medicine – from atomic soda and radioactive ointment to today’s radiopharmaceuticals.

Rounding off, on Wednesday 10 August, John Roberts, Nuclear Fellow at the Dalton Nuclear Institute in the University of Manchester, will ask if the negative perception of nuclear energy is deserved or whether the facts tell a different story about the safety of nuclear energy.

Dr David Jenkins said, “As hundreds of physicists descend on Manchester to celebrate Rutherford’s achievements and discuss the latest developments in nuclear physics, the three talks offer an opportunity for all to hear about the significance of Rutherford’s discovery and the effect it has had on our lives.”

Tuesday, August 2, 2011

Quantum world

Quantum ignorance: knowing the parts does not guarantee knowledge of the whole

Site Inovação Tecnológica newsroom - August 2, 2011

[Image: Professor Coruja knows that Joãozinho does not know everything about E but, in a quantum world, cannot expose his ignorance. Image: Vidick et al.]

The whole and the parts

For centuries, the inductive and deductive methods have allowed scientists to move from the whole to the parts and from the parts to the whole, improving our knowledge of the natural world with every step taken along those paths.

This acquisition of knowledge is possible because there is a close link between knowledge of the whole and knowledge of the parts - the more parts you know, the more you understand the whole, and the more you understand the whole, the greater your comprehension of each of its parts.

But the strange quantum theory guarantees that nothing is guaranteed: the quantum world, unlike the classical world, allows you to answer questions correctly even when you do not have all the information you would need to have.

That is what Thomas Vidick and Stephanie Wehner, of the National University of Singapore, assure us.

Classical ignorance

If you do not understand anything about quantum mechanics, don't worry: the celebrated physicist Richard Feynman used to say that nobody understands quantum mechanics.

You only need to know that the two physicists used quantum theory to demonstrate that it is possible to answer practically any question about the parts of a system correctly without needing to know the entire system.

Or that knowledge of the whole does not guarantee that you will be able to answer everything about its parts.

Imagine, for example, a student who arrives for an exam having read only half of the book that was supposed to teach him the subject.

The student has incomplete knowledge of the whole - the book - and will have to answer questions about its parts - the topics contained in its pages.

Common sense would say that the student will quickly be found out, as soon as he reaches the questions whose answers are in the pages he did not read.

And, in the classical world, that has always been and will continue to be the case.

Quantum theory, however, does not agree with this common sense, and states that the student could do very well on the test - he could even get a perfect score - if, instead of the classical information in a classical book, we are dealing with quantum information.

[Image: "We have observed this effect, but we really do not yet understand where it comes from," admits Dr. Stephanie Wehner. Image: CQT]

Quantum ignorance

In the case of quantum information, there is no way to know which information the student lacks - not even if the teacher learns in advance which part of the book the student has read.

In other words, so-called quantum ignorance is preserved, and the student will never be found out.

"We have observed this effect, but we really do not yet understand where it comes from," admits Wehner, adding that, conceptually, it is something very strange and, once again, proof that Richard Feynman was right.

And do not expect clearer explanations later on, because an intuitive understanding of quantum phenomena seems to be unattainable.

This impossibility of exposing quantum ignorance seems to join a series of other quantum phenomena that refuse to bend to a traditional mechanistic explanation.

Quantum intuition

"A central question for our understanding of the physical world is how our knowledge of the whole relates to our knowledge of the individual parts," the researchers write.

Or, conversely, how much our ignorance of the whole prevents knowledge of at least one of its parts.

"Based purely on classical intuition, we would certainly be inclined to conjecture that strong ignorance of the whole cannot come without significant ignorance of at least one of its parts.

"Curiously, however, this conjecture is false in quantum theory: we provide an explicit example where great ignorance of the whole can coexist with near-perfect knowledge of each of its parts," they conclude.

According to them, beyond the theoretical implications, which are not yet fully understood, the discovery may have serious implications for research in quantum cryptography, which aims to create ways of encoding information that are secure against any attack.

Dr. Wehner was one of the authors of the surprising discovery, made less than a year ago, of a relationship between so-called spooky action at a distance and Heisenberg's Uncertainty Principle.


Bibliography:
Does Ignorance of the Whole Imply Ignorance of the Parts? Large Violations of Noncontextuality in Quantum Theory
T. Vidick, S. Wehner
Physical Review Letters
29 July 2011
Vol.: 107, 030402 (2011)
DOI: 10.1103/PhysRevLett.107.030402

Source: Site Inovação Tecnológica - www.inovacaotecnologica.com.br URL: http://www.inovacaotecnologica.com.br/noticias/noticia.php?artigo=ignorancia-quantica-partes-conhecimento-todo

New way to record data on hard drives discovered

New way to record data on hard drives discovered: "Just when hard drives were thought to be approaching retirement, a new development may give them an extra dash of longevity."

Monday, August 1, 2011

Transparent electronics from graphene-based electrodes (w/ Video)

Source: http://www.physorg.com/news/2011-08-transparent-electronics-graphene-based-electrodes-video.html

Flexible, transparent electronics are closer to reality with the creation of graphene-based electrodes at Rice University.

Tuesday, July 26, 2011

Microwave study bursts Hubble bubble - physicsworld.com

Hard drive technology earns spintronics founders the Nobel Prize in Physics

Hard drive technology earns spintronics founders the Nobel Prize in Physics: "Just nine years after the discovery, considered a typical advance of basic science, the first hard drives with read heads based on the giant magnetoresistance (GMR) effect reached the market."

Molecular electronics and spintronics converge in an organic molecule

Molecular electronics and spintronics converge in an organic molecule: "An effect that earned the Nobel Prize in Physics has been observed for the first time in a single organic molecule, bringing together two of the most promising fields for the future of digital technology."

Scientists discover super-atom with a magnetic shield

More than one way to search for SUSY

Friday, July 22, 2011

Tevatron experiments close in on favored Higgs mass range

The Higgs particle, if it exists, most likely has a mass between 114 and 137 GeV/c², about 100 times the mass of a proton. This predicted mass range is based on stringent constraints established by earlier measurements, including the highest-precision measurements of the top quark and W boson masses, made by Tevatron experiments. If the Higgs particle does not exist, Fermilab's Tevatron experiments are on track to rule out this Higgs mass range in 2012.
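As a quick back-of-the-envelope check on the "about 100 times" figure, using the standard proton mass of roughly 0.94 GeV/c² (a value not quoted in the article):

\[
\frac{114\ \mathrm{GeV}/c^2}{0.94\ \mathrm{GeV}/c^2} \approx 121,
\qquad
\frac{137\ \mathrm{GeV}/c^2}{0.94\ \mathrm{GeV}/c^2} \approx 146,
\]

so the favored window corresponds to roughly 120 to 145 proton masses; "about 100 times" is a round figure.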

If the Higgs particle does exist, then the Tevatron experiments may soon begin to find an excess of Higgs-like decay events. With the number of collisions recorded to date, the Tevatron experiments are currently unique in their ability to study the decays of Higgs particles into bottom quarks. This signature is crucial for understanding the nature and behavior of the Higgs particle.

“Both the DZero and CDF experiments have now analyzed about two-thirds of the data that we expect to have at the end of the Tevatron run on September 30,” said Stefan Soldner-Rembold, co-spokesperson of the DZero experiment. “In the coming months, we will continue to improve our analysis methods and continue to analyze our full data sets. The search for the Higgs boson is entering its most exciting, final stage.”

For the first time, the CDF and DZero collaborations have successfully applied well-established techniques used to search for the Higgs boson to observe extremely rare collisions that produce pairs of heavy bosons (WW or WZ) that decay into heavy quarks. This well-known process closely mimics the production of a W boson and a Higgs particle, with the Higgs decaying into a bottom quark and antiquark pair—the main signature that both Tevatron experiments currently use to search for a Higgs particle. This is another milestone in a years-long quest by both experiments to observe signatures that are increasingly rare and similar to the Higgs particle.

“This specific type of decay has never been measured before, and it gives us great confidence that our analysis works as we expect, and that we really are on the doorsteps of the Higgs particle,” said Giovanni Punzi, co-spokesperson for the CDF collaboration.

To obtain their latest Higgs search results, the CDF and DZero analysis groups separately sifted through more than 700,000 billion proton-antiproton collisions that the Tevatron has delivered to each experiment since 2001. After the two groups obtained their independent Higgs search results, they combined them. Tevatron physicist Eric James will present the joint CDF-DZero search for the Higgs boson on Wednesday, July 27, at the EPS conference.
Provided by Fermilab

Tuesday, July 19, 2011

3D chip integrates processor and memory

3D chip integrates processor and memory: "The goal is to establish the operating parameters of a commercial processor packaged in a 3D chip, above all with regard to heat dissipation."

New battery engineering

It is hard to recall a new automotive technology that has received as much attention from nonautomotive media as vehicle electrification and, more specifically, batteries. Environmental publications debate the impact of electric vehicles (EVs) on planet Earth; business publications debate the viability and profitability of battery suppliers; investment publications analyze start-up EV and battery company IPOs; trade journals for minerals and raw materials debate supplies of lithium and rare earth metals; national news outlets cover a U.S. president’s visit to a battery plant groundbreaking; and Jay Leno holds celebrity EV races.

But for all their glamor and promise, emerging EV technologies face age-old challenges in their quest for introduction into high-volume production—namely, demonstration of safety, quality, and value. OEMs will not and cannot accept even the most promising new technologies until these fundamental attributes are demonstrated.

Large-format lithium-ion batteries are a good “new-tech” example. Consider the complexity: electrochemical reactions + embedded electronics and software + large package geometry and mass + high voltage + thermal management. Traditional engineering disciplines are more important than ever in this domain: failure mode effects and analysis, computer-aided engineering, computational fluid dynamics, and electronics circuit simulation, to name a few.

Battery-cell design alone requires optimization of chemistries for anodes, cathodes, and electrolytes; unique mechanical properties of separators; bonding techniques for electrodes, tabs, and casings; and embedded safety features. Electronic circuits measure and report voltages, currents, and temperatures while also controlling relays and thermal management components (fans, pumps, chillers). Embedded software algorithms calculate state-of-charge and power limits, perform real-time diagnostics, and make decisions on when to cool the pack, balance cells, or stop taking charge.
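As a concrete illustration of the kind of estimate such embedded software produces, here is a minimal coulomb-counting state-of-charge sketch in Python. The update rule is a standard textbook approach; the capacity, efficiency, and sampling values are invented, and production battery-management code adds voltage-based corrections, temperature compensation, and diagnostics.

# Minimal coulomb-counting state-of-charge (SOC) estimator.
# Hypothetical values: a 60 Ah cell, 99% charge-acceptance efficiency, 1 s sampling.
CAPACITY_AH = 60.0
COULOMBIC_EFF = 0.99

def update_soc(soc, current_a, dt_s):
    """Integrate current over dt_s to update SOC (0.0-1.0).
    Positive current means charging, negative means discharging."""
    eff = COULOMBIC_EFF if current_a > 0 else 1.0
    delta_ah = eff * current_a * dt_s / 3600.0
    return min(max(soc + delta_ah / CAPACITY_AH, 0.0), 1.0)  # clamp to physical limits

# Example: one hour of a constant 30 A discharge starting from 90% SOC.
soc = 0.90
for _ in range(3600):
    soc = update_soc(soc, current_a=-30.0, dt_s=1.0)
print(f"SOC after one hour: {soc:.2f}")  # about 0.40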

Systems engineering takes a lead role in tying these complex subsystems together. For instance, battery “life” (that is, the period of time over which energy storage capacity is reduced to a predetermined percentage of its beginning-of-life value) is dependent on many factors, such as cumulative time of exposure to high temperatures, number and nature of charge/discharge cycles, and mechanical integrity of the cell retention structures.

The battery systems engineer, therefore, must consider trade-offs in several areas to ensure battery life exceeds 10 years at expected levels of performance while not adding features that increase cost or mass. For example, investing in higher accuracy voltage and current measurement methods will allow cells to be operated closer to their full Vmin/Vmax range; active heating and cooling can extend cell life and range of operation but adds mass and controls complexity. How much electronics integration should occur at the module level vs. a pack-level centralized circuit topology?

The automobile—and automobile engineering—has been a remarkably complex and resilient industry over the past 100 years. More recently, vehicle electrification has attracted government grants, venture capitalism, Hollywood, and “Green Faith” to the conversation. However, commercially viable volumes will not be realized until safety, quality, and value are firmly established. And those attributes are the essence of good automotive engineering. It’s what we do.

Sensitive to electrostatic effects

With the current high price of fuels, the mindset of most people regarding ownership of large, inefficient vehicles has changed quite a bit. The demand for fuel-saving cars, including hybrids and even fully electric cars, is growing louder.

These types of vehicles need a number of electronic control units (ECUs) for more efficient engine management, fuel-saving operation, battery operation, and the like. Not only will power management require increased electronics in the car; so will safety features (ABS, airbags, distance control, etc.) and “luxury features” such as Internet access.

Additionally, functions that were hydraulic in past automotive designs (such as steering) will now be performed electrically. Vehicles are evolving into computers on wheels, which can be used to travel from A to B with high speed, high comfort, and, of course, high safety. The electronic parts of this "driving" computer are getting smaller and smaller and more sensitive to electrostatic effects - but their reliability must be maintained.

Protection against electrostatic discharge (ESD) can be realized to a certain extent on the semiconductor itself. This on-chip protection guarantees safe handling at the ECU manufacturer, in an ESD-protected area with basic ESD protective measures. Manufacturers follow international standards such as ANSI/ESD S20.20 or IEC 61340-5-1 so that even very sensitive parts can be handled safely.

The ECU, or certain parts of it (such as door modules), will then be installed in the vehicle at the car manufacturer or its direct suppliers. These assembly lines often have no or very limited ESD handling measures. Therefore, the ECU must possess a much higher robustness against ESD.

There is always a debate as to what the right (voltage) level of protection is and what the right test to guarantee this robustness is. Is it more efficient to make the semiconductor devices themselves more robust or to increase the protection on board? Resistors and capacitors that are needed for performance or electromagnetic compatibility (EMC) are common measures. Extra-protective elements such as transient voltage suppression diodes are, of course, an excellent choice but could affect the performance. They are considered additional elements to be placed on board for assembly, which is a source of failure the manufacturer wants to avoid, and they add extra cost to the ECU and therefore the car.

All of these possibilities have advantages and disadvantages depending on the specific application, cost pressure, and expertise of the board designer. In the past, automotive suppliers and original equipment manufacturers did not discuss this topic in an effective way.

The challenge for the creators of the next generation of cars will be to develop the right ESD protection needed at the best price, the most efficient way, and with the highest reliability. Therefore, the dialogue between the car manufacturer, the ECU designer, and the semiconductor manufacturer (who has started the discussion of the ESD topic) must be expanded to avoid ESD problems in the field.

Monday, July 18, 2011

Memristor - missing fourth electronic circuit element

Researchers at HP Labs have built the first working prototypes of an important new electronic component that may lead to instant-on PCs as well as analog computers that process information the way the human brain does.
The new component is called a memristor, or memory resistor. Until now, the circuit element had only been described in a series of mathematical equations written by Leon Chua, who in 1971 was an electrical engineering professor studying non-linear circuits. Chua knew the circuit element should exist — he even accurately outlined its properties and how it would work. Unfortunately, neither he nor the rest of the engineering community could come up with a physical manifestation that matched his mathematical expression.
Thirty-seven years later, a group of scientists from HP Labs has finally built real working memristors, thus adding a fourth basic circuit element to electrical circuit theory, one that will join the three better-known ones: the capacitor, resistor and the inductor.
Researchers believe the discovery will pave the way for instant-on PCs, more energy-efficient computers, and new analog computers that can process and associate information in a manner similar to that of the human brain.
According to R. Stanley Williams, one of four researchers at HP Labs’ Information and Quantum Systems Lab who made the discovery, the most interesting characteristic of a memristor device is that it remembers the amount of charge that flows through it.
Indeed, Chua's original idea was that the resistance of a memristor would depend upon how much charge has gone through the device. In other words, you can flow charge in one direction and the resistance will increase. If you push the charge in the opposite direction, it will decrease. Put simply, the resistance of the device at any point in time is a function of the history of the device - how much charge went through it, either forwards or backwards. That simple idea, now that it has been proven, will have a profound effect on computing and computer science.
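To make "resistance as a function of charge history" concrete, here is a small Python simulation of the linear ion-drift memristor model commonly associated with the HP work. The parameter values are arbitrary placeholders chosen to keep the printout readable; they are not figures from the Nature paper.

# Linear ion-drift memristor model (simplified). All values are hypothetical.
import math

R_ON, R_OFF = 100.0, 16000.0   # limiting resistances, ohms
D = 10e-9                      # device thickness, m
MU_V = 1e-14                   # dopant mobility, m^2 s^-1 V^-1
DT = 1e-6                      # time step, s

def step(w, current):
    """Advance the internal state w (width of the doped region) by one step."""
    dw = MU_V * (R_ON / D) * current * DT
    return min(max(w + dw, 0.0), D)   # the state stays inside the device

def memristance(w):
    """Resistance depends on how much charge has already flowed, via w."""
    x = w / D
    return R_ON * x + R_OFF * (1.0 - x)

# Drive with a slowly varying current (held constant within each chunk of steps):
w = 0.1 * D
for n in range(5):
    t = n * 2000 * DT
    i = 1e-3 * math.sin(2 * math.pi * 50 * t)   # 50 Hz, 1 mA amplitude
    for _ in range(2000):
        w = step(w, i)
    print(f"t = {t * 1e3:4.1f} ms   R = {memristance(w):8.1f} ohm")
# Reversing the current direction drives w (and hence the resistance) back,
# which is the charge-history dependence Williams describes.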
"Part of what’s going to come out of this is something none of us can imagine yet," says Williams. "But what we can imagine in and of itself is actually pretty cool."
For one thing, Williams says these memristors can be used as either digital switches or to build a new breed of analog devices.
For the former, Williams says scientists can now think about fabricating a new type of non-volatile random access memory (RAM) – or memory chips that don’t forget what power state they were in when a computer is shut off.
That’s the big problem with DRAM today, he says. "When you turn the power off on your PC, the DRAM forgets what was there. So the next time you turn the power on you’ve got to sit there and wait while all of this stuff that you need to run your computer is loaded into the DRAM from the hard disk."
With non-volatile RAM, that process would be instantaneous and your PC would be in the same state as when you turned it off.
Scientists also envision building other types of circuits in which the memristor would be used as an analog device.
Indeed, Chua himself noted the similarity between his own predictions of the properties of a memristor and what was then known about synapses in the brain. One of his suggestions was that you could perhaps do some type of neuronal computing using memristors. HP Labs thinks that's actually a very good idea.
"Building an analog computer in which you don’t use 1s and 0s and instead use essentially all shades of gray in between is one of the things we’re already working on," says Williams. These computers could do the types of things that digital computers aren’t very good at –- like making decisions, determining that one thing is larger than another, or even learning.
While a lot of researchers are currently trying to write a computer code that simulates brain function on a standard machine, they have to use huge machines with enormous processing power to simulate only tiny portions of the brain.
Williams and his team say they can now take a different approach: "Instead of writing a computer program to simulate a brain or simulate some brain function, we’re actually looking to build some hardware based upon memristors that emulates brain-like functions," says Williams.
Such hardware could be used to improve things like facial recognition technology, and enable an appliance to essentially learn from experience, he says. In principle, this should also be thousands or millions of times more efficient than running a program on a digital computer.
The results of the HP Labs team's findings will be published in a paper in today's edition of Nature. As for when we might see memristors being used in actual commercial devices, Williams says the limitations are more business-oriented than technological.
Ultimately, the problem is going to be related to the time and effort involved in designing a memristor circuit, he says. "The money invested in circuit design is actually much larger than building fabs. In fact, you can use any fab to make these things right now, but somebody also has to design the circuits and there’s currently no memristor model. The key is going to be getting the necessary tools out into the community and finding a niche application for memristors. How long this will take is more of a business decision than a technological one."
Image: An atomic force microscope image of a simple circuit with 17 memristors lined up in a row.  Each memristor has a bottom wire that contacts one side of the device and a top wire that contacts the opposite side.  The devices act as ‘memory resistors’, with the resistance of each device depending on the amount of charge that has moved through each one. The wires in this image are 50 nm wide, or about 150 atoms in total width.  Image courtesy of J. J. Yang, HP Labs.

After almost 20 years, math problem falls

Source: http://www.physorg.com/news/2011-07-years-math-problem-falls.html

Mathematicians and engineers are often concerned with finding the minimum value of a particular mathematical function. That minimum could represent the optimal trade-off between competing criteria — between the surface area, weight and wind resistance of a car’s body design, for instance. In control theory, a minimum might represent a stable state of an electromechanical system, like an airplane in flight or a bipedal robot trying to keep itself balanced. There, the goal of a control algorithm might be to continuously steer the system back toward the minimum.

Thursday, July 14, 2011

Galaxy sized twist in time pulls violating particles back into line

Source: http://www.physorg.com/news/2011-07-galaxy-sized-violating-particles-line.html

(PhysOrg.com) -- A University of Warwick physicist has produced a galaxy-sized solution which explains one of the outstanding puzzles of particle physics, while leaving the door open to the related conundrum of why different amounts of matter and antimatter seem to have survived the birth of our Universe.

Wednesday, July 13, 2011

Nanotechnology researchers go ballistic over graphene

Valleytronics: the new electronics of graphene

Valleytronics: the new electronics of graphene: "Since there is no analogy between nascent valleytronics and silicon-based electronics, it is hard to predict the possibilities for exploiting graphene's valleys. But scientists are betting high."

Graphene: even the defects are beautiful and functional

Graphene: even the defects are beautiful and functional: "Besides unbeatable properties when pure, graphene has some defects that might well be called virtues."

Sapphire chip makes a photon have two colors at the same time

Sapphire chip makes a photon have two colors at the same time: "In the quantum world, a photon can be blue and yellow at the same time without becoming green. Or red and orange, likewise at the same time, without becoming orange."

Electric photochromic lens changes color instantly

Electric photochromic lens changes color instantly: "The new transition lens changes color in response to the passage of an electric current - that way, the transition is virtually immediate."

Tuesday, July 12, 2011

Macromolecules get time to shine

Source: http://www.physorg.com/news/2011-07-supramolecules.html

(PhysOrg.com) -- What looks like a spongy ball wrapped in strands of yarn -- but a lot smaller -- could be key to unlocking better methods for catalysis, artificial photosynthesis or splitting water into hydrogen, according to Rice University chemists who have created a platform to analyze interactions between carbon nanotubes and a wide range of photoluminescent materials.

How to make a superlens from a few cans of cola - physicsworld.com

Antennas shrink and approach the theoretical limit

Antennas shrink and approach the theoretical limit: "It looks like a contact lens, but it is a flexible, highly miniaturized antenna - in fact, very close to the miniaturization limit for an antenna."

Does the Universe have a central axis of rotation?

Does the Universe have a central axis of rotation?: "Contrary to what current theory proposes, new data suggest that the Universe may have been born spinning, in a cosmic rotational motion."

Monday, July 11, 2011

New way to produce antimatter-containing atom discovered

Source: http://www.physorg.com/news/2011-07-antimatter-containing-atom.html

Physicists at the University of California, Riverside report that they have discovered a new way to create positronium, an exotic and short-lived atom that could help answer what happened to antimatter in the universe - why nature favored matter over antimatter at the universe's creation.

Sunday, July 10, 2011

Molecular self-assembly: here come chips that build themselves

Molecular self-assembly: here come chips that build themselves: "Polymer molecules cling to tiny pillars and organize themselves completely autonomously, forming all seven basic patterns considered essential for manufacturing electronic circuits."

Where electronics, spintronics, and quantum computing meet

Where electronics, spintronics, and quantum computing meet: "By interacting with individual magnetic molecules, scientists have reached a triple frontier where memory, logic, and quantum logic can be integrated."

Saturday, July 9, 2011

New 'cooler' technology offers fundamental breakthrough in heat transfer for microelectronics

Sandia’s “Cooler” technology offers fundamental breakthrough in heat transfer for microelectronics, other cooling applications
In this diagram of the Sandia Cooler, heat is transferred to the rotating cooling fins. Rotation of the cooling fins eliminates the thermal bottleneck typically associated with a conventional CPU cooler. (Diagram courtesy of Jeff Koplow)
(PhysOrg.com) -- Sandia National Laboratories has developed a new technology with the potential to dramatically alter the air-cooling landscape in computing and microelectronics, and lab officials are now seeking partners in the electronics chip cooling field to license and commercialize the device.

MIT spinout unveils new more powerful direct-diode laser

(PhysOrg.com) -- TeraDiode, a spinout company from MIT located nearby in Littleton, MA, has unveiled a new, more powerful direct-diode laser capable of cutting all the way through steel up to half an inch thick at various speeds. The laser is based on technology developed by company co-founders Dr. Bien Chann and Dr. Robin Huang while still at MIT.

CERN launches open hardware initiative

CERN launches open hardware initiative: "Open Hardware aims to do for equipment what the concept of Free Software does for computer programs."

Friday, July 8, 2011

Free to download: July’s Physics World

By Matin Durrani

It is perhaps a little-known fact that Griffin – the main character in H G Wells’ classic novel The Invisible Man – was a physicist. In the 1897 book, Griffin explains how he quit medicine for physics and developed a technique that made himself invisible by reducing his body’s refractive index to match that of air.

While Wells’ novel is obviously a work of fiction, the quest for invisibility has made real progress in recent years – and is the inspiration for this month’s special issue of Physics World, which you can download for free via this link.

Kicking off the issue is Sidney Perkowitz, who takes us on a whistle-stop tour of invisibility through the ages – from its appearance in Greek mythology to camouflaging tanks on the battlefield – before bringing us up to date with recent scientific developments.

Ulf Leonhardt then takes a light-hearted look at the top possible applications of invisibility science. Hold on to your hats for invisibility cloaks, perfect lenses and the ultimate anti-wrinkle cream.

Some of these applications might be years away, but primitive invisibility cloaks have already been built, with two independent groups of researchers having recently created cloaks operating with visible light that can conceal centimetre-scale objects, including a paper clip, a steel wedge and a piece of paper. But as Wenshan Cai and Vladimir Shalaev explain, these cloaks only work under certain conditions, namely with polarized light, in a 2D configuration and with the cloak immersed in a high-refractive-index liquid. It seems that the holy grail of hiding macroscopic objects viewed from any angle using unpolarized visible light is still some way off.

The special issue ends with a look at something even more fantastic-sounding – the possibility of creating a cloak that works not just in space but in space–time. Although no such “event cloak” has yet been built, Martin McCall and Paul Kinsler outline the principles of how it would work and describe what might be possible with a macroscopic, fully functioning device that conceals events from view. These applications range from the far-fetched, such as the illusion of a Star Trek-style transporter, to the more mundane, such as controlling signals in an optical routing system.

But, hey, that’s enough of me banging on about the special issue. Download it for free now and find out for yourself. And don’t forget to let us know what you think by e-mailing us at pwld@iop.org or via our Facebook or Twitter pages.

P.S. If you’re a member of the Institute of Physics, you can in addition read the issue in digital form via this link, where you can also listen to, search, share, save, print, archive and even translate individual articles. How’s that for value?
 


Thursday, July 7, 2011

Superconducting chips could become a reality

Superconducting chips could become a reality: "Superconducting germanium has been discovered, a surprising feat because this element was not considered a candidate to replace silicon, whose physical limits of miniaturization are fast approaching."

Was the Space Shuttle a Mistake?

The program's benefits weren't worth the cost—and now the U.S. is in jeopardy of repeating the same mistake, says a leading space policy expert.

Forty years ago, I wrote an article for Technology Review titled "Shall We Build the Space Shuttle?" Now, with the 135th and final flight of the shuttle at hand, and the benefit of hindsight, it seems appropriate to ask a slightly different question—"Should We Have Built the Space Shuttle?"

After the very expensive Apollo effort, a low-cost space transportation system for both humans and cargo was seen as key to the future of the U.S. space program in the 1980s and beyond. So developing some form of new space launch system made sense as the major NASA effort for the 1970s, presuming the United States was committed to continuing space leadership. But it was probably a mistake to develop this particular space shuttle design, and then to build the future U.S. space program around it.

The selection in 1972 of an ambitious and technologically challenging shuttle design resulted in the most complex machine ever built. Rather than lowering the costs of access to space and making it routine, the space shuttle turned out to be an experimental vehicle with multiple inherent risks, requiring extreme care and high costs to operate safely. Other, simpler designs were considered in 1971 in the run-up to President Nixon's final decision; in retrospect, taking a more evolutionary approach by developing one of them instead would probably have been a better choice.

The shuttle does, of course, leave behind a record of significant achievements. It is a remarkably capable vehicle. It has carried a variety of satellites and spacecraft to low-Earth orbit. It serviced satellites in orbit, most notably during the five missions to the Hubble Space Telescope. On a few flights, the shuttle carried in its payload bay a small pressurized laboratory, called Spacelab, which provided research facilities for a variety of experiments. That laboratory was a European contribution to the space shuttle program. With Spacelab and the Canadian-provided robotic arm used to grab and maneuver payloads, the shuttle set the precedent for intimate international cooperation in human spaceflight. The shuttle kept American and allied astronauts flying in space and opened up the spaceflight experience to scientists and engineers, not just test pilots. The space shuttle was a source of considerable pride for the United States; images of a shuttle launch are iconic elements of American accomplishment and technological leadership.

But were these considerable benefits worth the $209.1 billion (in 2010 dollars) that the program cost? I doubt it. The shuttle was much more expensive than anyone anticipated at its inception. Then-NASA administrator James Fletcher told Congress in 1972 that the shuttle would cost $5.15 billion to develop and could be operated at a cost of $10.5 million per flight. NASA only slightly overran development costs, which is normal for a challenging technological effort, but the cost of operating the shuttle turned out to be at least 20 times higher than was projected at the program's start. The original assumption was that the lifetime of the shuttle would be between 10 and 15 years. By operating the system for 30 years, with its high costs and high risk, rather than replacing it with a less expensive, less risky second-generation system, NASA compounded the original mistake of developing the most ambitious version of the vehicle. The shuttle's cost has been an obstacle to NASA starting other major projects.
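
As a rough back-of-the-envelope illustration (mine, not Logsdon's accounting), dividing the quoted total program cost by the number of flights gives an all-in figure per launch:

    # Rough, illustrative arithmetic based only on figures quoted in the article;
    # this is not an official NASA accounting.
    TOTAL_PROGRAM_COST = 209.1e9        # total cost in 2010 dollars
    TOTAL_FLIGHTS = 135                 # final flight count
    PROJECTED_COST_PER_FLIGHT = 10.5e6  # 1972 projection, then-year dollars

    all_in_per_flight = TOTAL_PROGRAM_COST / TOTAL_FLIGHTS
    print(f"All-in cost per flight: ${all_in_per_flight / 1e9:.2f} billion")
    # Roughly $1.5 billion per flight versus the $10.5 million projected in 1972.
    # The projection covered marginal operating cost only and is not inflation-
    # adjusted, which is why the more careful comparison above is "at least 20
    # times higher" rather than this naive ratio.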

But replacing the shuttle turned out to be difficult because of its intimate link to the construction of the space station. President Reagan approved the development of a space station in 1984, but the final design of what became the International Space Station (ISS) was not chosen until 1993. The first shuttle-launched element of the ISS was not orbited until 1998. It took 13 years to complete the ISS. Without the shuttle, construction of the ISS would have been impossible, leaving the U.S. with little choice but to keep the shuttle flying to finish the job. This necessity added almost two decades and billions of dollars of cost to the shuttle's operation. Whether the shuttle is ultimately viewed as successful will in large part be tied to the payoffs from the space station that it made possible. It will be years before those payoffs can be measured.

I have previously written that it was a policy mistake to choose the space shuttle as the centerpiece of the nation's post-Apollo space effort without agreeing on its goals (Science, May 30, 1986).

Today we are in danger of repeating that mistake, given Congressional and industry pressure to move rapidly to the development of a heavy-lift launch vehicle without a clear sense of how that vehicle will be used. Important factors in the decision to move forward with the shuttle were the desire to preserve Apollo-era NASA and contractor jobs, and the political impact of program approval on the 1972 presidential election. Similar pressures are influential today. If we learn anything from the space shuttle experience, it should be that making choices with multidecade consequences on the basis of such short-term considerations is poor public policy.

John M. Logsdon is professor emeritus at the Space Policy Institute, George Washington University, and author of John F. Kennedy and the Race to the Moon. In 2003, he was a member of the Columbia Accident Investigation Board.
Copyright Technology Review 2011.

terça-feira, 5 de julho de 2011

Sulphur Breakthrough Significantly Boosts Lithium Battery Capacity

Trapping sulphur particles in graphene cages produces a cathode material that could finally make lithium batteries capable of powering electric cars.

Lithium batteries have become the portable powerhouses of modern society. If you own a phone, mp3 player or laptop, you will already own a lithium battery. More than likely, you will have several.

But good as they are, lithium batteries are not up to the demanding task of powering the next generation of electric vehicles. They just don't have enough juice or the ability to release it quickly over and over again.

The problem lies with the cathodes in these batteries. The specific capacities of the anode materials in lithium batteries are 370 mAh/g for graphite and 4200 mAh/g for silicon. By contrast, the cathode specific capacities are 170 mAh/g for LiFePO4 and only 150 mAh/g for layered oxides.

So the way forward is clear: find a way to improve the cathode's specific capacity while maintaining all the other characteristics that batteries require, such as a decent energy efficiency and a good cycle life.

Today, Hailiang Wang and buddies at Stanford University say they've achieved a significant step towards this goal using sulphur as the cathode material of choice.

Chemists have known for many years that sulphur has potential: it has a theoretical specific capacity of 1672 mAh/g. But it also has a number of disadvantages, not least of which is the fact that sulphur is a poor conductor. On top of this, polysulphides tend to dissolve and wash away in many electrolytes, while sulphur tends to swell during the discharge cycle, causing it to crumble.
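
To see why the cathode is the weak link, it helps to run the numbers. The sketch below is purely illustrative, using only the specific capacities quoted above: if the anode and cathode are sized to store the same charge, their combined capacity per gram of active material is set largely by the weaker electrode.

    # Illustrative calculation using the specific capacities quoted above
    # (mAh per gram of active electrode material). For a cell whose anode and
    # cathode are sized to store the same charge, the capacity per gram of
    # combined active material is 1 / (1/C_anode + 1/C_cathode).

    def paired_capacity(c_anode, c_cathode):
        """Capacity per gram of anode-plus-cathode active material, in mAh/g."""
        return 1.0 / (1.0 / c_anode + 1.0 / c_cathode)

    graphite, silicon = 370.0, 4200.0                       # anode materials
    lifepo4, layered_oxide, sulphur = 170.0, 150.0, 1672.0  # cathode materials

    print(paired_capacity(graphite, lifepo4))   # ~116 mAh/g: today's pairing
    print(paired_capacity(silicon, lifepo4))    # ~163 mAh/g: a better anode barely helps
    print(paired_capacity(graphite, sulphur))   # ~303 mAh/g: a better cathode helps a lot
    print(paired_capacity(silicon, sulphur))    # ~1196 mAh/g: both electrodes improved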

But Wang and co say they've largely overcome these problems using a few clever nanoengineering techniques. Their trick is to create submicron sulphur particles and coat them in a kind of plastic called polyethylene glycol, or PEG. This traps polysulphides and prevents them from washing away.

Next, Wang and co wrap the coated sulphur particles in a graphene cage. The interaction between carbon and sulphur renders the particles electrically conducting and also supports the particles as they swell and shrink during each charging cycle.

The result is a cathode that retains a specific capacity of more than 600 mAh/g over 100 charging cycles.

That's impressive. Such a cathode would immediately lead to rechargeable lithium batteries with a much higher energy density than is possible today. "It is worth noting that the graphene-sulfur composite could be coupled with silicon based anode materials for rechargeable batteries with significantly higher energy density than currently possible," say Wang and co.

But there is more work ahead. Even though the material maintains a high specific capacity over 100 cycles, Wang and co say the capacity drops by 15 per cent in the process.
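
As a rough illustration (assuming, purely for the sake of argument, that the fade compounds at a constant per-cycle rate, which the paper does not claim), a 15 per cent drop over 100 cycles corresponds to a loss of roughly 0.16 per cent per cycle; extrapolating that rate shows why further improvement matters.

    # Hypothetical extrapolation: assume the reported 15% fade over 100 cycles
    # compounds at a constant per-cycle rate (the paper does not claim this).
    retention_after_100 = 0.85
    per_cycle = retention_after_100 ** (1 / 100)   # ~0.9984, i.e. ~0.16% loss per cycle

    for cycles in (100, 300, 500, 1000):
        remaining = per_cycle ** cycles
        print(f"{cycles:5d} cycles -> {remaining:.0%} of initial capacity")
    # 100 -> 85%, 300 -> 61%, 500 -> 44%, 1000 -> 20%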

So they will be hoping, and indeed expecting, to improve on this as they further optimise the material.

The next step, then, is to create a working battery out of this stuff. Wang and co say they plan to couple it to a pre-lithiated silicon-based anode to achieve this.

If it all works out (and that's a significant 'if'), your next car could be powered by Li-S batteries.

Ref: arxiv.org/abs/1107.0109: Graphene-Wrapped Sulfur Particles as a Rechargeable Lithium-Sulfur-Battery Cathode Material with High Capacity and Cycling Stability
Copyright Technology Review 2011.

sábado, 2 de julho de 2011

A Smarter, Stealthier Botnet

The "most technologically sophisticated" malware uses clever communications tricks and encryption to avoid disruption.

A new kind of botnet—a network of malware-infected PCs—behaves less like an army and more like a decentralized terrorist network, experts say. It can survive decapitation strikes, evade conventional defenses, and even wipe out competing criminal networks.

The botnet's resilience is due to a super-sophisticated piece of malicious software known as TDL-4, which in the first three months of 2011 infected more than 4.5 million computers around the world, about a third of them in the United States.

The emergence of TDL-4 shows that the business of installing malicious code on PCs is thriving. Such code is used to conduct spam campaigns and various forms of theft and fraud, such as siphoning off passwords and other sensitive data. It's also been used in the billion-dollar epidemic of fake anti-virus scams.

"Ultimately TDL-4 is simply a tool for maintaining and protecting a compromised platform for fraud," says Eric Howes, malware analyst for GFI Software, a security company. "It's part of the black service economy for malware, which has matured considerably over the past five years and which really needs a lot more light shed on it."

Unlike other botnets, the TDL-4 network doesn't rely on a few central "command-and-control" servers to pass along instructions and updates to all the infected computers. Instead, computers infected with TDL-4 pass along instructions to one another using public peer-to-peer networks. This makes it a "decentralized, server-less botnet," wrote Sergey Golovanov, a malware researcher at the Moscow-based security company Kaspersky Lab, in a blog post describing the new threat.

"The owners of TDL are essentially trying to create an 'indestructible' botnet that is protected against attacks, competitors, and antivirus companies," Golovanov wrote. He added that it "is one of the most technologically sophisticated, and most complex-to-analyze malware."

The TDL-4 botnet also breaks new ground by using an encryption algorithm that hides its communications from traffic-analysis tools. This is an apparent response to efforts by researchers to discover infected machines and disable botnets by monitoring their communication patterns, rather than simply identifying the presence of the malicious code.

Demonstrating that there is no honor among malicious software writers, TDL-4 scans for and deletes 20 of the most common forms of competing malware, so it can keep infected machines all to itself. "It's interesting to mention that the features are generally oriented toward achieving perfect stealth, resilience, and getting rid of 'competitor' malware," says Costin Raiu, another malware researcher at Kaspersky.

Distributed by criminal freelancers called affiliates, who get paid between $20 and $200 for every 1,000 infected machines, TDL-4 lurks on porn sites and some video and file-storage services, among other places, where it can be automatically installed using vulnerabilities in a victim's browser or operating system.

Once TDL-4 infects a computer, it downloads and installs as many as 30 pieces of other malicious software—including spam-sending bots and password-stealing programs. "There are other malware-writing groups out there, but the gang behind [this one] is specifically targeted on delivering high-tech malware for profit," says Raiu.
Copyright Technology Review 2011.

Making Speedy Memory Chips Reliable

IBM researchers have developed a programming trick that makes it possible to more reliably store large amounts of data using a promising new technology called phase-change memory. The company hopes to start integrating this storage technology into commercial products, such as servers that process data for the cloud, in about five years.

Like flash memory, commonly found in cell phones, phase-change memory is nonvolatile. That means it doesn't require any power to store the data. And it can be accessed rapidly for fast boot-ups in computers and more efficient operation in general. Phase-change memory has a speed advantage over flash, and Micron and Samsung are about to bring out products that will compete with flash in some mobile applications.

These initial products will use memory cells that store one bit each. But for phase-change memory to be cost-competitive for broader applications, it will need to achieve higher density, storing multiple bits per cell.

Greater density is necessary for IBM to achieve its goal of developing phase-change memory for high-performance systems, such as servers that process and store Internet data much faster than today's machines.

The IBM work announced today offers a solution. In the past, researchers haven't been able to make a device that uses multiple bits per cell that works reliably over months and years. That's because of the properties of the phase-change materials used to store the data. Scientists at IBM Research in Zurich have developed a software trick that allows them to compensate for this.

Each cell in these data-storage arrays is made up of a small spot of phase-change materials sandwiched between two electrodes. By applying a voltage across the electrodes, the material can be switched to any number of states along a continuum from totally unstructured to highly crystalline. The memory is read out by using another electrical pulse to measure the resistance of the material, which is much lower in the crystalline state.

To make multibit memory cells, the IBM group picked four different levels of electrical resistance. The trouble is that over time, the electrons in the phase-change cells tend to drift around, and the resistance changes, corrupting the data. The IBM group has shown that they can encode the data in such a way that when it's read out, they can correct for drift-based errors and get the right data.
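
The sketch below is a hypothetical illustration of the general idea rather than IBM's actual scheme: two bits map to four nominal resistance levels, the read-back value is assumed to drift according to the commonly used power-law model R(t) = R0 * (t/t0)^nu, and the decoder divides that modelled drift back out before choosing the nearest level.

    import math
    import random

    # Hypothetical sketch, NOT IBM's actual coding scheme: store 2 bits per cell
    # as one of four resistance levels, model resistance drift with the widely
    # used power law R(t) = R0 * (t / t0) ** nu, and divide the modelled drift
    # back out at read time before picking the nearest level.

    LEVELS_OHMS = [1e3, 1e4, 1e5, 1e6]  # illustrative targets for symbols 0..3
    DRIFT_NU = 0.05                     # assumed drift exponent; real drift is
                                        # state-dependent, this is a simplification
    T0 = 1.0                            # reference time after programming, seconds

    def program(symbol):
        """Map a 2-bit symbol (0..3) to its nominal programmed resistance."""
        return LEVELS_OHMS[symbol]

    def read_after(r0, t):
        """Resistance read back t seconds after programming: drift plus read noise."""
        return r0 * (t / T0) ** DRIFT_NU * random.uniform(0.9, 1.1)

    def decode(r_measured, t):
        """Undo the modelled drift, then choose the nearest level in log-resistance."""
        r_corrected = r_measured / (t / T0) ** DRIFT_NU
        distances = [abs(math.log10(r_corrected) - math.log10(level))
                     for level in LEVELS_OHMS]
        return distances.index(min(distances))

    six_months = 6 * 30 * 24 * 3600
    data = [random.randrange(4) for _ in range(1000)]
    readback = [decode(read_after(program(s), six_months), six_months) for s in data]
    errors = sum(a != b for a, b in zip(data, readback))
    print(f"symbol errors after six months: {errors} / {len(data)}")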

The IBM group has shown that error-correcting code can be used to reliably read out data from a 200,000-cell phase-change memory array after a period of six months. "That's not gigabits, like flash, but it's impressive," says Eric Pop, professor of electrical engineering and computer sciences at the University of Illinois at Urbana-Champaign. "They're using a clever encoding scheme that seems to prolong the life and reliability of phase-change memory."

For commercial products, that reliability timescale needs to come up to 10 years, says Victor Zhirnov, director of special projects at the Semiconductor Research Corporation. IBM says it can get there. "Electrical drift in these materials is mostly problematic in the first microseconds and minutes after programming," says Harris Pozidis, manager of memory and probe technologies at IBM Research in Zurich. The problem of drift can be statistically accounted for in the IBM coding scheme over whatever timeframe is necessary, says Pozidis, because it occurs at a known rate.

But phase-change memory won't be broadly adopted until power consumption can be kept in check, says Zhirnov. It still takes far too much energy to flip the bits in these arrays. That's due to the way the electrodes are designed, and many researchers are working on the problem. This spring, Pop's group at the University of Illinois demonstrated storage arrays that use carbon nanotubes to encode phase-change memory cells with 100 times less power.
Copyright Technology Review 2011.

sexta-feira, 1 de julho de 2011

F1 gearing up for new 'green' and 'cool' future

Formula One is preparing itself for a period of progressive and 'green' change.

Paul di Resta of Great Britain and Force India drives during the Canadian Formula One Grand Prix at the Circuit Gilles Villeneuve on June 12 in Montreal, Canada.

Formula One is preparing itself for a period of progressive change towards greater fuel efficiency, clearer 'green' credentials and much bigger popularity with car makers and racing fans.

This vision of a future in which 'green equals cool', accelerating the sport towards a more eco-friendly set of values suited to a new age of low emissions, was spelt out by team chiefs this week.

Mercedes' boss Ross Brawn and McLaren's Martin Whitmarsh were members of a panel that met F1 supporters at a 'meet the fans' forum organised by the Formula One Teams' Association (FOTA) at the McLaren headquarters on Thursday.

Referring to the introduction of new 1.6-litre V6 turbo-charged engines in 2014, Brawn said: "It's not only about the fact that the new engine is going to be more efficient in itself.

"It's the message it gives -- that it's cool to have a really efficient engine and race on a lot less fuel."
For decades, Formula One has been associated with huge levels of power, high levels of noise and fears of equally high levels of pollution, mostly associated with a perceived need to give the sport's fans a deafening experience of glorious, loud and penetrative .

Now, according to the new generation of team chiefs who are leading the forward march for the sport, those days are drawing to a close. The new engines, and their associated energy recovery systems, were approved by the sport's governing body, the International Motoring Federation (FIA), on Wednesday.

The 'new age of F1' will usher in a fuel-efficiency improvement of at least 35 per cent, energy recovery systems, fuel restrictions, maximum revs of up to 15,000 rpm (down from 18,000) and an overall power ceiling of around 750 bhp.

"We're setting dramatic targets for reducing the amount of fuel we race with - 30 per cent, 40 per cent and even 50 per cent less than what we're racing on now, but still with the same power and the same excitement," explained Brawn.

He went on to explain that the new F1 vision was designed to chime with the efforts of modern car manufacturers in a world of climate change, diminishing oil supplies and rising fuel prices.

But this progressive thinking was not adopted easily by the sport; it faced severe opposition from many stakeholders, including commercial ringmaster Bernie Ecclestone, who had argued that the sport needed the noise developed by engines revving to 18,000 rpm.

Brawn added: "You're not going to get manufacturers coming in with the normally aspirated V8 we have now. The new engine creates opportunity for manufacturers to come in -- and that's a vital reason why we need a new engine with a more relevant specification for the manufacturers."

Whitmarsh was asked about the sport's future with free-to-air television and said that all the teams believed in it.

"All the FOTA teams believe in free-to-air," he said. "There will be parts of the market where there will be some differentiated service offered, but F1 teams are creating brand exposure and all the names we have on our cars require us to have a large audience.

"Our current contract requires it remains on free-to-air and the teams are going to safeguard our interests and those of the fans in this regard, but it isn't as simple as we must stay free-to-air and we must stay away from pay per view.

"We have to embrace (the understanding) that media is really multi-faceted. We have to make sure there is a mass free entry to be able to see Grands Prix, but there's an awful lot of people who want a lot more information than you're going to get on free-to-air."

The future of F1's broadcasting arrangements has been under scrutiny in recent weeks following interest from Rupert Murdoch's SKY organisation and reports suggesting that, in Britain, the BBC was not prepared to continue beyond its current contract.

(c) 2011 AFP
"F1 gearing up for new 'green' and 'cool' future." July 1st, 2011. http://www.physorg.com/news/2011-07-f1-gearing-green-cool-future.html