Single Molecules As Electric Conductors


Researchers from Graz University of Technology, Humboldt University in Berlin, M.I.T., Montan University in Leoben and Georgia Institute of Technology report an important advance in the understanding of electrical conduction through single molecules.

Minimum size, maximum efficiency: The use of molecules as elements in electronic circuits shows great potential. One of the central challenges up until now has been that most molecules only start to conduct once a large voltage has been applied. An international research team with participation from Graz University of Technology has shown that molecules containing an odd number of electrons are much more conductive at low bias voltages. These fundamental findings in the highly dynamic research field of nanotechnology open up a diverse array of possible applications: more efficient microchips and components with considerably increased storage densities are conceivable.

One electron instead of two: Most stable molecules have a closed-shell configuration with an even number of electrons. Molecules with an odd number of electrons tend to be harder for chemists to synthesize, but they conduct much better at low bias voltages. Although using an odd rather than an even number of electrons may seem simple, it is a fundamental realization in the field of nanotechnology, because as a result, metal elements in molecular electronic circuits can now be replaced by single molecules. “This brings us a considerable step closer to the ultimate miniaturization of electronic components”, explains Egbert Zojer from the Institute for Solid State Physics of the Graz University of Technology.

Molecules instead of metal

The motivation for this basic research is the vision of circuits that consist of only a few molecules. “If it is possible to get molecular components to completely assume the functions of a circuit’s various elements, this would open up a wide array of possible applications, the full potential of which will only become apparent over time. In our work we show a path to realizing the highly electrically conductive elements”, says Zojer, describing the momentous consequences of the discovery.

Specific new perspectives are opened up in the fields of molecular electronics, sensor technology, and the development of bio-compatible interfaces between inorganic and organic materials. The latter refers to contact with biological systems, such as human cells, which can be connected to electronic circuits in a bio-compatible fashion via the conductive molecules.


http://portal.tugraz.at/portal/page/portal/TU_Graz

Clever Acoustics Help Blind People See The World


Video from portable cameras is analysed to calculate the distance of obstacles and predict the movements of people and cars. This information is then transformed and relayed to a blind person as a three-dimensional ‘picture’ of sound.

The concept is apparently simple, and two prototypes have been successfully tested. Laser and digital video cameras become the eyes of a blind user, seeing the objects and activity going on around them.

Researchers from the University of Bristol have developed powerful real-time image processing and some clever algorithms to then identify objects and obstacles, such as trees, street furniture, vehicles and people. The system uses the stereo images to create a “depth map” for calculating distances. The system can also analyse moving objects and predict where they are going.
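The depth map described above rests on standard stereo geometry: an object's distance is inversely proportional to its disparity between the two camera images. A minimal sketch of that relation (the focal length and camera baseline in the example are illustrative values, not CASBLiP's actual parameters):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Distance (m) to a point whose image shifts by `disparity_px`
    pixels between two cameras `baseline_m` apart: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 12 cm baseline.
# A 20-pixel disparity then corresponds to an object 4.2 m away.
distance = depth_from_disparity(20, focal_px=700, baseline_m=0.12)
```

Nearby objects produce large disparities and far objects small ones, which is why the useful range of such systems is limited to a few metres.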

So much for the image processing, but how do you present this visual information to a blind person? Technology developed at the University of Laguna in Spain makes it possible to transform spatial information into three-dimensional acoustic maps.

A blind person wears headphones and hears how sounds change as they move around. The stereo audio system makes it possible to place sounds so that the brain can interpret them as a point in space. Sounds get louder as you walk towards objects, quieter as you move away. Objects to your right are heard on your right, and if you move your head the sound moves too. And if something is heading right for you, you'll hear it coming, with a tone that tells you to get out of the way.
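The behaviour described above can be sketched with two standard audio techniques: inverse-distance attenuation and constant-power stereo panning. This is a simplified illustration, not the University of Laguna's actual acoustic-mapping algorithm:

```python
import math

def spatialize(azimuth_deg, distance_m, ref_m=1.0):
    """Return (left_gain, right_gain) for a sound source.

    Loudness falls off with distance (inverse law, clamped near the
    listener); constant-power panning places the source left or right.
    azimuth_deg: -90 = hard left, 0 = centre, +90 = hard right.
    """
    atten = ref_m / max(distance_m, ref_m)
    pan = (azimuth_deg / 90.0 + 1.0) * math.pi / 4.0  # 0 .. pi/2
    return atten * math.cos(pan), atten * math.sin(pan)
```

A centred source at the reference distance produces equal gains in both ears; walking away from it scales both gains down, and moving it to one side shifts the gain balance, which the brain interprets as a point in space.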

The full picture

The EU-funded CASBLiP project was conceived to integrate the image processing and acoustic mapping technologies into a single, portable device that could be worn by blind people and help them to navigate outdoors.

The University of Laguna worked to adapt its acoustic mapping system and the University of Bristol refined its image processing algorithms. The device also incorporates a gyroscopic sensor developed by the University of Marche, Italy. This component, called the head-positioning sensor, detects how the wearer moves his head. It feeds back the position of the head and the direction it is facing, so that the relative position of the sounds being played to the wearer also moves as expected. For example, if you turn your head towards a sound on the right, the sound must move left towards the centre of the sound picture.

Vision for the future

After three years, the consortium has produced two prototype devices mounted on a helmet. They have been tested successfully in trials by blind people in several real-world environments, including busy streets. Two organisations for blind people (the German Federation of the Blind and Partially Sighted and the Francesco Cavazza Institute, Italy) were heavily involved in the testing programme.

The first design (M1) uses a laser sensor developed by Siemens and originally intended to detect passengers in cars. It can calculate the distance to objects up to 5m away within a 60º field of view. The system is mounted inside glasses and cannot be seen by others because it uses infrared light. The M1 has been extensively tested by blind users, who are able to recognise items, such as chairs and trees, from the sound picture they receive.

A second version (M2) adds two digital video cameras to either side of a helmet. It can detect moving objects and predict their path.

The University of Marche has also worked closely with the Cavazza Institute to build a complementary GPS location system. This technology could be used to pinpoint the location of a blind person and integrate the device with additional data sources, such as mapping services. It could provide the wearer with verbal directions to their destination.

“We know that the technology works,” says Guillermo Peris-Fajarnés, who coordinated the project from the Research Group on Graphic Technologies at the Universidad Politecnica de Valencia. “Our tests have been very successful and blind people have been able to navigate comfortably in controlled tests and even along a normal street.”

“There is still a lot of development work to do before this could go on the market, especially to prove that the system is 100% reliable,” Peris-Fajarnés notes. “You can't risk it going wrong while a user is crossing the road.”

He says the consortium has decided to continue work on this aspect beyond the end of the EU funding period.

Nevertheless, Peris-Fajarnés is confident that the device could be commercialised: “We are now looking for manufacturing partners to explore the possibilities for a commercially viable product. There's no other system like this available and it should complement existing aids, such as the white stick. But its commercial success will depend on miniaturising the system and mounting the cameras onto glasses.”

http://cordis.europa.eu/ictresults/

Solar plane to make public debut




Swiss adventurer Bertrand Piccard is set to unveil a prototype of the solar-powered plane he hopes eventually to fly around the world.

The initial version, spanning 61m but weighing just 1,500kg, will undergo trials to prove it can fly at night.

Mr Piccard, who made history by circling the globe non-stop in a balloon in 1999, says he wants to demonstrate the potential of renewable energies.

He expects to make a crossing of the Atlantic in 2012.

The flight would be a risky endeavour. Only now is solar and battery technology becoming mature enough to sustain flight through the night - and then only in unmanned planes.

But Piccard's Solar Impulse team has invested tremendous energy - and no little money - in trying to find what they believe is a breakthrough design.

"I love this type of vision where you set the goal and then you try to find a way to reach it, because this is challenging," he told BBC News.

Testing programme

The HB-SIA has the look of a glider but is on the scale - in terms of its width - of a modern airliner.

The aeroplane incorporates composite materials to keep it extremely light and uses super-efficient solar cells, batteries, motors and propellers to get it through the dark hours.


Piccard will begin testing with short runway flights in which the plane lifts just a few metres into the air.

As confidence in the machine develops, the team will move to a full day-night flight cycle. This has never been done before in a piloted solar-powered plane.

HB-SIA should be succeeded by HB-SIB. It is likely to be bigger, and will incorporate a pressurised capsule and better avionics.

It is probable that Piccard will follow a route around the world in this aeroplane similar to the path he took in the record-breaking Breitling Orbiter 3 balloon - travelling at a low latitude in the Northern Hemisphere. The flight could go from the United Arab Emirates, to China, to Hawaii, across the southern US, southern Europe, and back to the UAE.

Measuring success

Although the vehicle is expected to be capable of flying non-stop around the globe, Piccard will in fact make five long hops, sharing flying duties with project partner André Borschberg.

"The aeroplane could do it theoretically non-stop - but not the pilot," said Picard.

"We should fly at roughly 25 knots and that would make it between 20 and 25 days to go around the world, which is too much for a pilot who has to steer the plane.

"In a balloon you can sleep, because it stays in the air even if you sleep. We believe the maximum for one pilot is five days."

The public unveiling on Friday of the HB-SIA is taking place at Dubendorf airfield near Zürich.

"The real success for Solar Impulse would be to have enough millions of people following the project, being enthusiastic about it, and saying 'if they managed to do it around the world with renewable energies and energy savings, then we should be able to do it in our daily life'."

New Technique For Fabricating Nanowire Circuits


Applied scientists at Harvard University, in collaboration with researchers from the German universities of Jena, Göttingen, and Bremen, have developed a new technique for fabricating nanowire photonic and electronic integrated circuits that may one day be suitable for high-volume commercial production.
The work was spearheaded by graduate student Mariano Zimmler and Federico Capasso, Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering, both of Harvard's School of Engineering and Applied Sciences (SEAS), and Prof. Carsten Ronning of the University of Jena; the findings will be published in Nano Letters. The researchers have filed for U.S. patents covering their invention.

While semiconductor nanowires (rods roughly one-thousandth the width of a human hair in diameter) can be easily synthesized in large quantities using inexpensive chemical methods, reliable and controlled strategies for assembling them into functional circuits have posed a major challenge. By combining spin-on glass technology, used in silicon integrated circuit manufacturing, with photolithography (transferring a circuit pattern onto a substrate with light), the team demonstrated a reproducible, high-volume, and low-cost fabrication method for integrating nanowire devices directly onto silicon.

"Because our fabrication technique is independent of the geometrical arrangement of the nanowires on the substrate, we envision further combining the process with one of the several methods already developed for the controlled placement and alignment of nanowires over large areas," said Capasso. "We believe the marriage of these processes will soon provide the necessary control to enable integrated nanowire photonic circuits in a standard manufacturing setting."

The structure of the team's nanowire devices is based on a sandwich geometry: a nanowire is placed between a highly conductive substrate, which functions as a common bottom contact, and a top metallic contact, with spin-on glass as a spacer layer to prevent the metal contact from shorting to the substrate. As a result, current can be uniformly injected along the length of the nanowires. These devices can then function as light-emitting diodes, with the color of light determined by the type of semiconductor nanowire used.

To demonstrate the potential scalability of their technique, the team fabricated hundreds of nanoscale ultraviolet light-emitting diodes by using zinc oxide nanowires on a silicon wafer. More broadly, because nanowires can be made of materials commonly used in electronics and photonics, they hold great promise for integrating efficient light emitters, from ultraviolet to infrared, with silicon technology. The team plans to further refine their novel method with an aim towards electrically contacting nanowires over entire wafers.

"Such an advance could lead to the development of a completely new class of integrated circuits, such as large arrays of ultra-small nanoscale lasers that could be designed as high-density optical interconnects or be used for on-chip chemical sensing," said Ronning.

The team's co-authors are postdoctoral fellow Wei Yi and Venkatesh Narayanamurti, John A. and Elizabeth S. Armstrong Professor and dean, both of Harvard's School of Engineering and Applied Sciences; graduate student Daniel Stichtenoth, University of Göttingen; and postdoctoral fellow Tobias Voss, University of Bremen.

The research was supported by the National Science Foundation (NSF) and the German Research Foundation. The authors also acknowledge the support of two Harvard-based centers, the National Science Foundation Nanoscale Science and Engineering Center (NSEC) and the Center for Nanoscale Systems (CNS), a member of the National Nanotechnology Infrastructure Network (NNIN).


http://www.fas.harvard.edu/home/

High-speed Integrated Nanowire Circuits


Chemists and engineers at Harvard University have made robust circuits from minuscule nanowires that align themselves on a chip of glass during low-temperature fabrication, creating rudimentary electronic devices that offer solid performance without high-temperature production or high-priced silicon.

The researchers, led by chemist Charles M. Lieber and engineer Donhee Ham, produced circuits at low temperature by running a nanowire-laced solution over a glass substrate, followed by regular photolithography to etch the pattern of a circuit. Their merging of low-temperature fabrication and nanowires in a high-performance electronic device is described this week in the journal Nature.

"By using common, lightweight and low-cost materials such as glass or even plastic as substrates, these nanowire circuits could make computing devices ubiquitous, allowing powerful electronics to permeate all aspects of living," says Lieber, the Mark Hyman Jr. Professor of Chemistry in Harvard's Faculty of Arts and Sciences. "Because this technique can create a high-quality circuit at low temperatures, it could be a technology that finally decouples quality electronics from single crystal silicon wafers, which are resilient during high-temperature fabrication but also very expensive."

Lieber, Ham and colleagues used their technique to produce nanowire-based logical inverters and ring oscillators, which are inverters in series. The ring oscillator devices, which are critical for virtually all digital electronics, performed considerably better than comparable ring oscillators produced at low temperatures using organic semiconductors, achieving a speed roughly 20 times faster. The nanowire-derived ring oscillators reached a speed of 11.7 megahertz, outpacing by a factor of roughly 10,000 the excruciatingly slow performance attained by other nanomaterial circuits.
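A ring oscillator of N inverters (N odd) oscillates at f = 1/(2·N·t_d), where t_d is the delay of one stage. Assuming, purely for illustration, a three-stage ring (the stage count of the Nature devices is not given here), the reported 11.7 MHz implies a per-stage delay on the order of 14 ns:

```python
def ring_osc_freq(n_stages, stage_delay_s):
    """Oscillation frequency of an odd-length inverter ring:
    a transition must traverse the ring twice per full period."""
    return 1.0 / (2 * n_stages * stage_delay_s)

def stage_delay(n_stages, freq_hz):
    """Per-inverter delay implied by a measured ring frequency."""
    return 1.0 / (2 * n_stages * freq_hz)

# Hypothetical 3-stage ring at the reported 11.7 MHz:
t_d = stage_delay(3, 11.7e6)  # roughly 14 ns per inverter
```

The same relation explains why ring oscillators are the standard benchmark for a new device technology: the measured frequency directly exposes the switching delay of a single gate.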

"These nanowire circuits' performance was impressive," says Ham, assistant professor of electrical engineering in Harvard's Division of Engineering and Applied Sciences. "This finding gives us confidence that we can ramp up these elementary circuits to build more complex devices, which is something we now plan to do."

Lieber and Ham say these functional nanowire circuits demonstrate nanomaterials' potential in electronics applications. The circuits could be used in devices such as low-cost radio-frequency tags and fully integrated high-refresh-rate displays, the scientists write in Nature; on a larger scale, such circuits could provide a foundation for more complex nanoelectronics. The technique Lieber and Ham used to produce a nanowire-based circuit on a glass substrate is also compatible with other commonplace materials such as plastics, broadening its potential applicability.


http://www.harvard.edu/

Scientists Engineer Cellular Circuits That Count Events


MIT and Boston University engineers have designed cells that can count and "remember" cellular events, using simple circuits in which a series of genes are activated in a specific order.
Such circuits, which mimic those found on computer chips, could be used to count the number of times a cell divides, or to study a sequence of developmental stages. They could also serve as biosensors that count exposures to different toxins.

The team developed two types of cellular counters, both described in the May 29 issue of Science. Though the cellular circuits resemble computer circuits, the researchers are not trying to create tiny living computers.

"I don't think computational circuits in biology will ever match what we can do with a computer," said Timothy Lu, a graduate student in the Harvard-MIT Division of Health Sciences and Technology (HST) and one of two lead authors of the paper.

Performing very elaborate computing inside cells would be extremely difficult because living cells are much harder to control than silicon chips. Instead, the researchers are focusing on designing small circuit components to accomplish specific tasks.

"Our goal is to build simple design tools that perform some aspect of cellular function," said Lu.

Ari Friedland, a graduate student at Boston University, is also a lead author of the Science paper. Other authors are Xiao Wang, postdoctoral associate at BU; David Shi, BU undergraduate; George Church, faculty member at Harvard Medical School and HST; and James Collins, professor of biomedical engineering at BU.

Learning to count

To demonstrate their concept, the team built circuits that count up to three cellular events, but in theory, the counters could go much higher.

The first counter, dubbed the RTC (Riboregulated Transcriptional Cascade) Counter, consists of a series of genes, each of which produces a protein that activates the next gene in the sequence.

With the first stimulus — for example, an influx of sugar into the cell — the cell produces the first protein in the sequence, an RNA polymerase (an enzyme that controls transcription of another gene). During the second influx, the first RNA polymerase initiates production of the second protein, a different RNA polymerase.

The number of steps in the sequence is, in theory, limited only by the number of distinct bacterial RNA polymerases. "Our goal is to use a library of these genes to create larger and larger cascades," said Lu.

The counter's timescale is minutes or hours, making it suitable for keeping track of cell divisions. Such a counter would be potentially useful in studies of aging.

The RTC Counter can be "reset" to start counting the same series over again, but it has no way to "remember" what it has counted. The team's second counter, called the DIC (DNA Invertase Cascade) Counter, can encode digital memory, storing a series of "bits" of information.

The process relies on an enzyme known as invertase, which chops out a specific section of double-stranded DNA, flips it over and re-inserts it, altering the sequence in a predictable way.

The DIC Counter consists of a series of DNA sequences. Each sequence includes a gene for a different invertase enzyme. When the first activation occurs, the first invertase gene is transcribed and assembled. It then binds the DNA and flips it over, ending its own transcription and setting up the gene for the second invertase to be transcribed next.

When the second stimulus is received, the cycle repeats: The second invertase is produced, then flips the DNA, setting up the third invertase gene for transcription. The output of the system can be read out either by inserting an output gene, such as the gene for green fluorescent protein, into the cascade so that it is produced after a certain number of inputs, or by sequencing the cell's DNA.

This circuit could in theory go up to 100 steps (the number of different invertases that have been identified). Because it tracks a specific sequence of stimuli, such a counter could be useful for studying the unfolding of events that occur during embryonic development, said Lu.
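The DIC Counter's flip-and-arm behaviour can be modelled as a chain of one-shot bits: each input flips the first un-flipped unit, which arms the next stage, and a reporter fires after the final flip. This is a toy model for illustration only; the class and attribute names are invented, not the published genetic construct:

```python
class DICCounter:
    """Toy model of a DNA-invertase-cascade counter."""

    def __init__(self, n_stages=3):
        # One irreversible 'bit' of DNA orientation per invertase unit.
        self.flipped = [False] * n_stages

    def pulse(self):
        """One input stimulus: the active invertase flips its own DNA
        unit (ending its own transcription) and arms the next stage."""
        for i, done in enumerate(self.flipped):
            if not done:
                self.flipped[i] = True
                break  # only one flip per stimulus

    @property
    def count(self):
        return sum(self.flipped)

    @property
    def reporter_on(self):
        # e.g. GFP placed after the last unit, expressed after N inputs
        return all(self.flipped)
```

Because the flips are written into the DNA itself, the stored count survives even if the cell's proteins turn over, which is exactly the "memory" property that distinguishes the DIC from the RTC counter.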

Other potential applications include programming cells to act as environmental sensors for pollutants such as arsenic. Engineers would also be able to specify the length of time an input needs to be present to be counted, and the length of time that can fall between two inputs so they are counted as two events instead of one.

They could also design the cells to die after a certain number of cell divisions or night-day cycles.

"There's a lot of concern about engineered organisms — if you put them in the environment, what will happen?" said Collins, who is also a Howard Hughes Medical Institute investigator. These counters "could serve as a programmed expiration date for engineered organisms."

The research was funded by the National Institutes of Health Director's Pioneer Award Program, the National Science Foundation FIBR program, and the Howard Hughes Medical Institute.


http://web.mit.edu/

Superconducting Chips To Become Reality


Most chemical elements become superconducting at low temperatures or high pressures, but until now some, such as copper, silver, gold, and the semiconductor germanium, have resisted superconductivity. Scientists at the Forschungszentrum Dresden-Rossendorf (FZD) research center have now produced superconducting germanium for the first time. Furthermore, they were able to unravel a few of the mysteries that come along with superconducting semiconductors.
Superconductors are substances that conduct electricity without losses when cooled down to very low temperatures. Pure semiconductors, like silicon or germanium, are almost non-conducting at low temperatures, but transform into conducting materials after doping with foreign atoms. An established method of doping is ion implantation (ions = charged atoms), by which foreign ions are embedded into the crystal lattice of a semiconductor. To produce a superconducting semiconductor, an extreme number of foreign atoms is necessary, even more than the substance would usually be able to absorb. At the FZD, germanium samples were doped with about six gallium atoms per 100 germanium atoms. With these experiments, the scientists were indeed able to prove that the doped germanium layer, only sixty nanometers thick, became superconducting, and not just the clusters of foreign atoms which can easily form during extreme doping.

As the germanium lattice is heavily damaged by ion implantation, it has to be repaired afterwards. For this purpose, a flash-lamp annealing facility has been developed at the FZD. It allows the destroyed crystal lattice to be repaired by rapidly heating the sample surface (within a few milliseconds) while the distribution of the dopant atoms is kept almost the same.

From a scientific point of view, the new material is very promising. It exhibits a surprisingly high critical magnetic field with respect to the temperature where the substance becomes superconducting. For many materials, superconductivity occurs only at very low temperatures, slightly above the absolute zero point of -273 degrees Celsius or 0 Kelvin. The gallium doped germanium samples become superconducting at about 0.5 Kelvin; however, the FZD researchers expect the temperature to increase further by changing various parameters during ion implantation or annealing.

Physicists have been dreaming about superconducting semiconductors for a long time, but saw little chance of the semiconductor germanium becoming superconducting at all. Germanium was the material for the first generation of transistors; however, it was soon replaced by silicon, the current material of microelectronics. Recently, the “old” semiconductor material germanium has aroused more and more interest, as, compared with silicon, it allows for faster circuits.

Experts even believe germanium will be rediscovered for micro- and nanoelectronics. The reason for such a renaissance lies in the fact that miniaturization in the silicon-based microelectronics industry is coming to an end. Today, extremely thin oxide layers are needed for transistors, down to a level where silicon oxide no longer works well. Germanium as a new material for chips would bring two big advantages: it would enable both faster processes and further miniaturization in micro- and nanoelectronics. Superconducting germanium could thus help to realize circuits for novel computers.

The scientists at the Forschungszentrum Dresden-Rossendorf followed a targeted approach when searching for a new superconducting semiconductor. Instead of doping with boron, which had resulted in superconducting silicon two years ago in France, the scientists chose gallium because of its higher solubility in germanium. In many systematic experiments they proved that the superconductivity of germanium can be reproduced. Furthermore, they were able to show that the transition temperature marking the start of superconductivity can be raised within certain limits.

In the future, the scientists at the two FZD institutes “Ion Beam Physics and Materials Research” and “Dresden High Magnetic Field Laboratory” will combine their know-how in order to fine-tune different rather complex parameters for further experiments, thus hopefully discovering further mysteries of superconducting semiconductors.

Milestone For 3D Mobile Video And Gaming


MicroOLED, a developer of efficient organic light emitting diode (OLED) technologies, has announced the release of a new high-definition multimedia interface allowing its high-resolution microdisplays to connect to the Texas Instruments Incorporated (TI) OMAP™ platform.

A groundbreaking innovation for mobile gaming and video entertainment, the new interface enables 3D video or 3D gaming while using specially-designed video glasses. Leveraging a single HDMI connection to the mobile phone, the solution generates both left and right SD video streams onto the microdisplays embedded within the glasses, thus allowing gamers and video enthusiasts to view and/or interact with their favorite multimedia content while on the go.

The new system also features the MicroOLED wide video graphics array plus (WVGA+) high-resolution OLED microdisplay with RGB video interface. This microdisplay is based on MicroOLED’s proprietary OLED-on-CMOS (Complementary Metal-Oxide-Semiconductor) technology, which delivers high-resolution video while offering an extremely small footprint, low power consumption and outstanding picture quality. This advancement makes the technology ideally suited for high-end video glasses that support best-in-class 3D image quality and mobile entertainment, whether at home or on the move.

MicroOLED’s technology connects to TI’s proven OMAP platform via a single HDMI connection, which delivers both the processing performance to decode high-definition video streams and the power from which MicroOLED’s technology generates two 873 x 500 pixel video streams. This dual-technology combination will empower mobile telecommunications carriers to sell full, DVD-quality 3D content on their video-on-demand portals for mobile applications. The result of this effort is the creation of technologies for 3D mobile devices and applications enabling a life-like user experience.

“By integrating our energy-efficient microdisplay into 3D video glasses and this 3D interface, we are enabling a full range of new mobile entertainment applications ranging from 3D gaming to HD mobile video. This is made possible only by combining TI’s OMAP platform and MicroOLED’s microdisplays, two leading technologies that deliver low power consumption and high performance,” explained Eric Marcellin-Dibon, CEO of MICROOLED.


http://www.cea.fr/

Breakthrough For Post-4G Communications


With much of the mobile world yet to migrate to 3G mobile communications, let alone 4G, European researchers are already working on a new technology able to deliver data wirelessly up to 12.5Gb/s.

The technology – known as ‘millimetre (mm)-wave’ or microwave photonics – has commercial applications not just in telecommunications (access and in-house networks) but also in instrumentation, radar, security, radio astronomy and other fields.

Despite the quantum leap in performance made possible by combining the latest radio and optics technologies to produce mm-wave components, it will probably be only a few years before the average EU citizen sees real benefits.

This is thanks to research and development work being done by the EU-funded project IPHOBAC, which brings together partners from both academia and industry with the aim of developing a new class of components and systems for mm-wave applications.

The mm-wave band is the extremely high frequency part of the radio spectrum, from 30 to 300 gigahertz (GHz), and it gets its name from having a wavelength of one to 10mm. Until now, the band has been largely undeveloped, so the new technology makes more of the scarce and much-in-demand spectrum available for exploitation.
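The band limits follow directly from the relation λ = c/f: 30 GHz corresponds to a wavelength of about 10 mm, and 300 GHz to about 1 mm. A quick check:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_mm(freq_ghz):
    """Free-space wavelength in millimetres for a frequency in GHz."""
    return C / (freq_ghz * 1e9) * 1000.0

low_edge = wavelength_mm(30)    # ~10 mm
high_edge = wavelength_mm(300)  # ~1 mm
```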

New products from Europe

IPHOBAC is not simply a ‘paper project’ where the technology is researched, but very much a practical exercise to develop and commercialise a new class of products with a ‘made in Europe’ label on them.

While several companies in Japan and the USA have been working on merging optical and radio frequency technologies, IPHOBAC is the world’s first fully integrated effort in the field, involving many different companies. As a result, the three-year project, which runs until end-2009, already has an impressive list of achievements to its name.

It recently unveiled a tiny component, a transmitter able to transmit a continuous signal not only through the entire mm-wave band but beyond. Its full range is 30 to 325GHz and even higher frequency operation is now under investigation. The first component worldwide able to deliver that range of performance, it will be used in both communications and radar systems. Other components developed by the project include 110GHz modulators, 110GHz photodetectors, 300GHz dual-mode lasers, 60GHz mode-locked lasers, and 60GHz transceivers.

Truly disruptive technology

Project coordinator Andreas Stöhr says millimetre-wave photonics is a truly disruptive technology for high frequency applications. “It offers unique capabilities such as ultra-wide tunability and low-phase noise which are not possible with competing technologies, such as electronics,” he says.

What this will mean in practical terms is not only ultra-fast wireless data transfer over telecommunications networks, but also a whole range of new applications (http://www.iphobac-survey.org).

One of these, a 60GHz Photonic Wireless System, was demonstrated at the ICT 2008 exhibition in Lyon and was voted into the Top Ten Best exhibits. The system allows wireless connectivity in full high definition (HD) between devices in the home, such as a set-top box, TV, PC, and mobile devices. It is the first home area network to demonstrate the speeds necessary for full wireless HD of up to 3Gb/s.

The system can also be used to provide multi-camera coverage of live events in HD. “There is no time to compress the signal as the director needs to see live feed from every camera to decide which picture to use, and ours is the only technology which can deliver fast enough data rates to transmit uncompressed HD video/audio signals,” says Stöhr.

The same technology has been demonstrated for access telecom networks and has delivered world record data rates of up to 12.5Gb/s over short- to medium-range wireless spans, or 1500 times the speed of upcoming 4G mobile networks.

One way in which the technology can be deployed in the relatively short term, according to Stöhr, is wirelessly supporting very fast broadband to remote areas. “You can have your fibre in the ground delivering 10Gb/s but we can deliver this by air to remote areas where there is no fibre or to bridge gaps in fibre networks,” he says.

Systems for outer space

The project is also developing systems for space applications, working with the European Space Agency. Stöhr said he could not reveal details as this has not yet been made public, save to say the systems will operate in the 100GHz band and are needed immediately.

There are various ongoing co-operation projects with industry to commercialise the components and systems, and some components are already at a pre-commercial stage and are being sold in limited numbers. There are also ongoing talks with some of the biggest names in telecommunications, including Siemens, Ericsson, Thales Communications and Malaysia Telecom.

“In just a few years’ time everybody will be able to see the results of the IPHOBAC project in telecommunications, in the home, in radio astronomy and in space. It is a completely new technology which will be used in many applications, even medical ones, where mm-wave devices to detect skin cancer are under investigation,” says Stöhr.


http://cordis.europa.eu/ictresults/

Next Generation Wireless Chips


The Mathematical Institute of the University of Cologne conducts research within the European project ICESTARS (Integrated Circuit/Electromagnetic Simulation and design Technologies for Advanced Radio Systems-on-chip). New mathematical algorithms for the next generation of radio chips will be developed under the leadership of Prof. Dr. Caren Tischendorf.
According to Prof. Tischendorf: "In the future, mobile devices will provide customers with services ranging from telephony and internet to mobile TV and remote banking, anytime, anywhere. It is impossible to realize the necessary, extremely high data transfer rates within the frequency bands used today (approximately 1-3GHz)." The project serves to enable the development of low-cost wireless chips that can operate in a frequency range of up to 100GHz.

The leader of the ICESTARS project, Marq Kole of NXP Semiconductors, says: "By the end of the project in 2010 we aim to have accelerated the chip development process in the extremely high frequency range with new methods and simulation tools, in order to keep European chip developers in a top position across the whole spectrum of wireless communications." The ICESTARS project is funded by the European Commission within the EU 7th Framework Programme and led by the Dutch company NXP Semiconductors. The German semiconductor company Qimonda will develop advanced analog simulation techniques within the project.

Additional partners are the software companies AWR-APLAC from Finland, focusing on frequency-domain simulation algorithms, and MAGWEL from Belgium, focusing on electromagnetic simulations. Besides the University of Cologne, the university partners (the Upper Austria University of Applied Sciences, the University of Wuppertal in Germany and the University of Oulu in Finland) are concentrating on the modeling questions, algorithmic problems and simulation issues to be solved for robust, accelerated automated testing of analog circuits with digital signal processing in the extremely high frequency range.


http://www.uni-koeln.de/

New Radio Chip Mimics Human Ear


MIT engineers have built a fast, ultra-broadband, low-power radio chip, modeled on the human inner ear, that could enable wireless devices capable of receiving cell phone, Internet, radio and television signals.
Rahul Sarpeshkar, associate professor of electrical engineering and computer science, and his graduate student, Soumyajit Mandal, designed the chip to mimic the inner ear, or cochlea. The chip is faster than any human-designed radio-frequency spectrum analyzer and also operates at much lower power.

"The cochlea quickly gets the big picture of what's going on in the sound spectrum," said Sarpeshkar. "The more I started to look at the ear, the more I realized it's like a super radio with 3,500 parallel channels."

Sarpeshkar and his students describe their new chip, which they have dubbed the "radio frequency (RF) cochlea," in a paper to be published in the June issue of the IEEE Journal of Solid-State Circuits. They have also filed for a patent to incorporate the RF cochlea in a universal or software radio architecture that is designed to efficiently process a broad spectrum of signals including cellular phone, wireless Internet, FM, and other signals.

The RF cochlea mimics the structure and function of the biological cochlea, which uses fluid mechanics, piezoelectrics and neural signal processing to convert sound waves into electrical signals that are sent to the brain.

As sound waves enter the cochlea, they create mechanical waves in the cochlear membrane and the fluid of the inner ear, activating hair cells (cells that cause electrical signals to be sent to the brain). The cochlea can perceive a 100-fold range of frequencies -- in humans, from 100 to 10,000 Hz. Sarpeshkar used the same design principles in the RF cochlea to create a device that can perceive signals at million-fold higher frequencies, which includes radio signals for most commercial wireless applications.

The device demonstrates what can happen when researchers take inspiration from fields outside their own, says Sarpeshkar.

"Somebody who works in radio would never think of this, and somebody who works in hearing would never think of it, but when you put the two together, each one provides insight into the other," he says. For example, in addition to its use for radio applications, the work provides an analysis of why cochlear spectrum analysis is faster than any known spectrum-analysis algorithm. Thus, it sheds light on the mechanism of hearing as well.

The RF cochlea, embedded on a silicon chip measuring 1.5 mm by 3 mm, works as an analog spectrum analyzer, detecting the composition of any electromagnetic waves within its perception range. Electromagnetic waves travel through electronic inductors and capacitors (analogous to the biological cochlea's fluid and membrane). Electronic transistors play the role of the cochlea's hair cells.
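The LC analogy can be made concrete: each section of such a ladder resonates at f = 1/(2π√(LC)), so exponentially tapering the element values spaces the resonances geometrically across the band, just as the biological cochlea maps frequency exponentially along its length. The sketch below is illustrative only; the stage count, band edges and impedance are assumptions, not the MIT design:

```python
import math

def rf_cochlea_stages(f_low=1e9, f_high=10e9, n=8, z0=50.0):
    """Resonant frequencies (and L, C values) of an exponentially
    tapered LC ladder, in the spirit of an RF cochlea.

    Each stage resonates at f = 1 / (2*pi*sqrt(L*C)); stages are
    spaced geometrically from f_high down to f_low, mimicking the
    cochlea's exponential frequency map. z0 is an assumed impedance
    used to split the L*C product into individual L and C values.
    """
    ratio = (f_low / f_high) ** (1.0 / (n - 1))
    stages = []
    for i in range(n):
        f = f_high * ratio ** i      # geometric spacing, high to low
        w = 2.0 * math.pi * f
        stages.append((f, z0 / w, 1.0 / (w * z0)))  # (f, L, C)
    return stages

for f, L, C in rf_cochlea_stages():
    print(f"{f/1e9:6.3f} GHz  L={L*1e9:.3f} nH  C={C*1e12:.3f} pF")
```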

The analog RF cochlea chip is faster than any other RF spectrum analyzer and consumes about 100 times less power than what would be required for direct digitization of the entire bandwidth. That makes it desirable as a component of a universal or "cognitive" radio, which could receive a broad range of frequencies and select which ones to attend to.

Biological inspiration

This is not the first time Sarpeshkar has drawn on biology for inspiration in designing electronic devices. Trained as an engineer but also a student of biology, he has found many similar patterns in the natural and man-made worlds. For example, Sarpeshkar's group, in MIT's Research Laboratory of Electronics, has also developed an analog speech-synthesis chip inspired by the human vocal tract and a novel analysis-by-synthesis technique based on the vocal tract. The chip's potential for robust speech recognition in noise and its potential for voice identification have several applications in portable devices and security applications.

The researchers have built circuits that can analyze heart rhythms for wireless heart monitoring, and are also working on projects inspired by signal processing in cells. In the past, his group has worked on hybrid analog-digital signal processors inspired by neurons in the brain.

Sarpeshkar says that engineers can learn a great deal from studying biological systems that have evolved over hundreds of millions of years to perform sensory and motor tasks very efficiently in noisy environments while using very little power.

"Humans have a long way to go before their architectures will successfully compete with those in nature, especially in situations where ultra-energy-efficient or ultra-low-power operation are paramount," he said. Nevertheless, "We can mine the intellectual resources of nature to create devices useful to humans, just as we have mined her physical resources in the past."


http://www.mit.edu/

Could Violent Video Games Reduce Rather Than Increase Violence?


Does playing violent video games make players aggressive? It is a question that has taxed researchers, sociologists, and regulators ever since the first console was plugged into a TV and the first shots fired in a shoot 'em up game.


Writing May 14 in the International Journal of Liability and Scientific Enquiry, Patrick Kierkegaard of the University of Essex, England, suggests that there is scant scientific evidence that video games are anything but harmless, and that they do not lead to real-world aggression. Moreover, his research shows that previous work is biased towards the opposite conclusion.

Video games have come a long way since the simplistic ping-pong and cascade games of the early 1970s, the later space-age Asteroids and Space Invaders, and the esoteric Pac-man. Today, severed limbs, drive-by shootings, and decapitated bodies captivate a new generation of gamers and gruesome scenes of violence and exploitation are the norm.

Award-winning video games, such as the Grand Theft Auto series, thrive on murder, theft, and destruction on every imaginable level, explains Kierkegaard, and gamers can boost their chances of winning the game with a virtual visit to a prostitute, followed by a violent mugging to recover the money exchanged. Games such as '25 To Life' remain controversial, with storylines involving violent gangs taking hostages and killing cops, while games such as World of Warcraft and Doom are obviously unrelated to the art of crochet or gentle country walks.

Kierkegaard points out that these violent games are growing more realistic with each passing year and most relish their plots of violence, aggression and gender bias. But, he asks, "Is there any scientific evidence to support the claims that violent games contribute to aggressive and violent behaviour?"

Media scare stories about gamers obsessed with violent games, and the many research reports claiming that virtual violence breeds real violence, would seem to suggest so. Kierkegaard has studied a range of such research papers, several of which have concluded, since the early 1980s, that video games can lead to juvenile delinquency, fighting at school and during free play periods, and violent criminal behaviour such as assault and robbery. Evidence from brain scans carried out while gamers play also seems to support a connection between playing video games and activation of brain regions associated with aggression.

However, Kierkegaard explains, there is no obvious link between real-world violence statistics and the advent of video games. If anything, the effect seems to be the exact opposite, and one might argue that video game usage has reduced real violence. Despite several high-profile incidents in US academic institutions, "violent crime, particularly among the young, has decreased dramatically since the early 1990s," says Kierkegaard, "while video games have steadily increased in popularity and use. For example, in 2005, there were 1,360,088 violent crimes reported in the USA compared with 1,423,677 the year before." With millions of sales of violent games, "the world should be seeing an epidemic of violence," he says. "Instead, violence has declined."

Research is inconclusive, emphasises Kierkegaard. It is possible that certain types of video game could affect emotions, views, behaviour, and attitudes; but then, so can books, which can also lead to violent behaviour in those already predisposed to violence. The inherent biases in many of the research studies examined by Kierkegaard point to the need for a more detailed study of video games and their psychological effects.

http://www.inderscience.com/

Violent Video Games Feed Aggression In Kids


A new study -- presented last month at the inaugural seminar sponsored by Iowa State University's Center for the Study of Violence -- showed effects of violent video games on aggression over a 3-6 month period in children from Japan as well as the United States.


ISU Distinguished Professor of Psychology Craig Anderson -- director of the Center for the Study of Violence -- presented the results from the study, which is published in the November issue of Pediatrics, the professional journal of the American Academy of Pediatrics.

The research links an earlier ISU study of 364 American children ages 9-12 with two similar studies of more than 1,200 children between the ages of 12-18 from Japan. It found that exposure to violent video games was a causal risk factor for aggression and violence in those children.

"Basically what we found was that in all three samples, a lot of violent video game play early in a school year leads to higher levels of aggression during the school year, as measured later in the school year -- even after you control for how aggressive the kids were at the beginning of the year," said Anderson, who was recently elected president-elect of the International Society for Research on Aggression (ISRA).

ISU Assistant Professor of Psychology Douglas Gentile, the center's associate director, and Akira Sakamoto -- an associate professor of psychology at Ochanomizu University and a leading violent video games researcher from Japan -- collaborated with Anderson and additional Japanese researchers on the study.

Studying kids' video game habits and aggression

Researchers assessed the children's video game habits and their level of physical aggression against each other at two different times during the school year.

"The studies varied somewhat in the length of time between what we're calling time one and time two (times between the reports of video game use and physical behavior)," Anderson said. "The shortest duration was three months and the longest was six months.

"Each of the three samples showed significant increases in aggression by children who played a lot of violent video games," he said.

Anderson began collaborating with Japanese researchers on the study several years ago when he visited Japan to give an invited address at the International Simulation and Gaming Association convention. He says Japan's cultural differences with the U.S. made it attractive for the comparison studies.

"The culture is so different and their overall violence rate is so much lower than in the U.S.," Anderson said. "The argument has been made -- it's not a very good argument, but it's been made by the video game industry -- that all our research on violent video game effects must be wrong because Japanese kids play a lot of violent video games and Japan has a low violence rate.

"By gathering data from Japan, we can test that hypothesis directly and ask, 'Is it the case that Japanese kids are totally unaffected by playing violent video games?' And of course, they aren't," he said. "They're affected pretty much the same way American kids are."

"It is important to realize that violent video games do not create school shooters," Gentile said. "They create opportunities to be vigilant for enemies, to practice aggressive ways of responding to conflict and to see aggression as acceptable. In practical terms, that means that when bumped in the hallway, children begin to see it as hostile and react more aggressively in response to it. Violent games are certainly not the only thing that can increase children's aggression, but these studies show that they are one part of the puzzle in both America and Japan."

Violence Desensitization From Video Games


Nicholas Carnagey, an Iowa State psychology instructor and research assistant, and ISU Distinguished Professor of Psychology Craig Anderson collaborated on the study with Brad Bushman, a former Iowa State psychology professor now at the University of Michigan, and Vrije Universiteit, Amsterdam.

They authored a paper titled "The Effects of Video Game Violence on Physiological Desensitization to Real-Life Violence," which was published in the current issue of the Journal of Experimental Social Psychology. In this paper, the authors define desensitization to violence as "a reduction in emotion-related physiological reactivity to real violence."

Their paper reports that past research -- including their own studies -- documents that exposure to violent video games increases aggressive thoughts, angry feelings, physiological arousal and aggressive behaviors, and decreases helpful behaviors. Previous studies also found that more than 85 percent of video games contain some violence, and approximately half of video games include serious violent actions.

The methodology

Their latest study tested 257 college students (124 men and 133 women) individually. After taking baseline physiological measurements on heart rate and galvanic skin response -- and asking questions to control for their preference for violent video games and general aggression -- participants played one of eight randomly assigned violent or non-violent video games for 20 minutes. The four violent video games were Carmageddon, Duke Nukem, Mortal Kombat or Future Cop; the non-violent games were Glider Pro, 3D Pinball, 3D Munch Man and Tetra Madness.

After playing a video game, a second set of five-minute heart rate and skin response measurements was taken. Participants were then asked to watch a 10-minute videotape of actual violent episodes taken from TV programs and commercially released films in the following four contexts: courtroom outbursts, police confrontations, shootings and prison fights. Heart rate and skin response were monitored throughout the viewing.

The physical differences

When viewing real violence, participants who had played a violent video game experienced skin response measurements significantly lower than those who had played a non-violent video game. The participants in the violent video game group also had lower heart rates while viewing the real-life violence compared to the nonviolent video game group.

"The results demonstrate that playing violent video games, even for just 20 minutes, can cause people to become less physiologically aroused by real violence," said Carnagey. "Participants randomly assigned to play a violent video game had relatively lower heart rates and galvanic skin responses while watching footage of people being beaten, stabbed and shot than did those randomly assigned to play nonviolent video games.

"It appears that individuals who play violent video games habituate or 'get used to' all the violence and eventually become physiologically numb to it."

Participants in the violent versus non-violent games conditions did not differ in heart rate or skin response at the beginning of the study, or immediately after playing their assigned game. However, their physiological reactions to the scenes of real violence did differ significantly, a result of having just played a violent or a non-violent game. The researchers also controlled for trait aggression and preference for violent video games.

The researchers' conclusion

They conclude that the existing video game rating system, the content of much entertainment media, and the marketing of those media combine to produce "a powerful desensitization intervention on a global level."

"It (marketing of video game media) initially is packaged in ways that are not too threatening, with cute cartoon-like characters, a total absence of blood and gore, and other features that make the overall experience a pleasant one," said Anderson. "That arouses positive emotional reactions that are incongruent with normal negative reactions to violence. Older children consume increasingly threatening and realistic violence, but the increases are gradual and always in a way that is fun.

"In short, the modern entertainment media landscape could accurately be described as an effective systematic violence desensitization tool," he said. "Whether modern societies want this to continue is largely a public policy question, not an exclusively scientific one."

The researchers hope to conduct future research investigating how differences between types of entertainment -- violent video games, violent TV programs and films -- influence desensitization to real violence. They also hope to investigate who is most likely to become desensitized as a result of exposure to violent video games.

"Several features of violent video games suggest that they may have even more pronounced effects on users than violent TV programs and films," said Carnagey.

Violent Video Games And Hostile Personalities


ISU Distinguished Professor of Psychology Craig Anderson, Assistant Professor of Psychology Douglas Gentile, and doctoral student Katherine Buckley share the results of three new studies in their book, "Violent Video Game Effects on Children and Adolescents" (Oxford University Press, 2007). It is the first book to unite empirical research and public policy related to violent video games.

Study One: kids' games still have behavioral effect

The book's first study found that even exposure to cartoonish children's violent video games had the same short-term effects on increasing aggressive behavior as the more graphic teen (T-rated) violent games. The study tested 161 9- to 12-year-olds and 354 college students. Each participant was randomly assigned to play either a violent or non-violent video game. "Violent" games were defined as those in which intentional harm is done to a character motivated to avoid that harm. The definition was not an indication of the graphic or gory nature of any violence depicted in a game.

The researchers selected one children's non-violent game ("Oh No! More Lemmings!"), two children's violent video games with happy music and cartoonish game characters ("Captain Bumper" and "Otto Matic"), and two violent T-rated video games ("Future Cop" and "Street Fighter"). For ethical reasons, the T-rated games were played only by the college-aged participants.

The participants subsequently played another computer game designed to measure aggressive behavior in which they set punishment levels in the form of noise blasts to be delivered to another person participating in the study. Additional information was also gathered on each participant's history of violent behavior and previous violent media viewing habits.

The researchers found that participants who played the violent video games -- even if they were children's games -- punished their opponents with significantly more high-noise blasts than those who played the non-violent games. They also found that habitual exposure to violent media was associated with higher levels of recent violent behavior -- with the newer interactive form of media violence found in video games more strongly related to violent behavior than exposure to non-interactive media violence found in television and movies.

"Even the children's violent video games -- which are more cartoonish and often show no blood -- had the same size effect on children and college students as the much more graphic games have on college students," said Gentile. "What seems to matter is whether the players are practicing intentional harm to another character in the game. That's what increases immediate aggression -- more than how graphic or gory the game is."

Study Two: the violent video game effect

Another study detailed in the book surveyed 189 high school students. The authors found that respondents who had more exposure to violent video games held more pro-violent attitudes, had more hostile personalities, were less forgiving, believed violence to be more typical, and behaved more aggressively in their everyday lives. The survey measured students' violent TV, movie and video game exposure; attitudes toward violence; personality trait hostility; personality trait forgiveness; beliefs about the normality of violence; and the frequency of various verbally and physically aggressive behaviors.

The researchers were surprised that the relation to violent video games was so strong.

"We were surprised to find that exposure to violent video games was a better predictor of the students' own violent behavior than their gender or their beliefs about violence," said Anderson. "Although gender, aggressive personality, and beliefs about violence all predict aggressive and violent behavior, violent video game play still made an additional difference.

"We were also somewhat surprised that there was no apparent difference in the video game violence effect between boys and girls or adolescents with already aggressive attitudes," he said.

The study found that one variable -- trait forgiveness -- appeared to make that person less affected by exposure to violent video games in terms of subsequent violent behavior, but this protective effect did not occur for less extreme forms of physical aggression.

Study Three: violent video games and school

A third new study in the book assessed 430 third-, fourth- and fifth-graders, their peers, and their teachers twice during a five-month period in the school year. It found that children who played more violent video games early in the school year changed to see the world in a more aggressive way, and became more verbally and physically aggressive later in the school year -- even after controlling for how aggressive they were at the beginning of the study. Higher aggression and lower pro-social behavior were in turn related to those children being more rejected by their peers.

"I was startled to find those changes in such a short amount of time," said Gentile. "Children's aggression in school did increase with greater exposure to violent video games, and this effect was big enough to be noticed by their teachers and peers within five months."

The study additionally found an apparent lack of "immunity" to the effects of media violence exposure. TV and video game screen time was also found to be a significant negative predictor of grades.

The book's final chapter offers "Helpful Advice for Parents and Other Caregivers on Choosing and Using Video Games." The authors say that providing clear, science-based information to parents and caregivers about the harmful effects of exposure to violent video games is the first step in helping educate the people who are best able to use the information.

Anderson and Gentile will present their findings at the Society for Research in Child Development Biennial Meeting in Boston March 29 through April 1.

Video Games Are Exemplary Aggression Teachers


Like other fathers and sons, Douglas Gentile and his father have spent many hours arguing about video games. What makes them different is that Douglas, an Iowa State University assistant professor of psychology, is one of the country's top researchers on the effects of media on children. His father, J. Ronald Gentile, is a leading researcher on effective teaching and a distinguished teaching professor emeritus of educational psychology at the University of Buffalo, State University of New York.

Through their discussions, they realized that video games use the same techniques that really great teachers use.

"That realization prompted us to ask the question, 'Should we therefore be surprised that violent video games could teach aggression to players?'" said Doug Gentile, who is also director of research for the National Institute on Media and the Family.

Violent video games teach aggression

The Gentiles decided to test that hypothesis. Through a study of nearly 2,500 youths, they found that video games are indeed effective teaching tools. Students who played multiple violent video games actually learned from those games, developing more hostile attributions and more aggressive behaviors over a span of six months.

"We know a lot about how to be an effective teacher, and we know a lot about how to use technology to teach," said lead author Douglas Gentile. "Video games use many of these techniques and are highly effective teachers. So we shouldn't be surprised that violent video games can teach aggression."

The paper presents conceptual and empirical analyses of several of the "best practices" of learning and instruction, and demonstrates how violent video games use those practices effectively to teach aggression. It documents how violent video games motivate learners to persevere in learning and mastering skills to navigate through complex problems and changing environments -- just like good teachers do.

The study describes seven parallels between video games and effective teachers, including the ability to adapt to the level of each individual learner -- requiring practice distributed across time -- and teaching for transfer to real-world situations.

Studying nearly 2,500 youths

To test their hypothesis, the Gentiles studied three groups of youths -- 430 third through fifth-graders; 607 eighth and ninth graders; and 1,441 older adolescents with an average age of 19. Elementary and middle school children were recruited from nine Minnesota schools, and older adolescents from Iowa State University.

In the longitudinal elementary school sample, students, their peers, and their teachers completed surveys at two points during the school year. The surveys assessed the subject's aggressive thoughts and self-reported fights, and their media habits -- including violent video game exposure. Teachers and peers were also asked to rate the participants' aggressive behavior.

Controlling for age, race, sex, total amount of time spent playing all video games, and prior aggressive behaviors, the research found that the amount of rated violence in the games played predicted increased aggression. Among elementary students, playing multiple violent video games increased their risk of being highly aggressive -- as rated by peers and teachers -- by 73 percent, when compared to those who played a mix of violent and non-violent games, and by 263 percent compared to those who played only non-violent games.
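The 73 and 263 percent figures are relative-risk increases: (risk in the exposed group / risk in the comparison group − 1) × 100. A minimal sketch of the arithmetic, using made-up illustrative proportions rather than the study's actual data:

```python
def percent_risk_increase(p_exposed, p_comparison):
    """Percent increase in risk: (relative risk - 1) * 100."""
    return (p_exposed / p_comparison - 1.0) * 100.0

# Hypothetical shares of children rated "highly aggressive" per group
# (assumed numbers for illustration only):
p_multiple_violent = 0.173  # played multiple violent games
p_mixed_games = 0.100       # played a mix of violent and non-violent games

print(round(percent_risk_increase(p_multiple_violent, p_mixed_games)))  # 73
```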

"Because we had longitudinal data, we were able to show that students who play multiple violent games actually changed to have a greater hostile attribution bias, which also increased their aggressive behaviors over prior levels," the researchers wrote.

And because learning occurs from video games, regardless of whether the effects are intentional or unintentional, the Gentiles added that this "should make us more thoughtful about designing games and choosing games for children and adolescents to play."

But this study is not all bad news for video game technology. Because video games were found to be such effective teaching tools, the Gentiles propose greater educational use of today's smarter technology found in those games -- technology that "thinks" along with students, adapting instruction to each student's current skills, strategies or mistakes.

While some schools are already incorporating this type of educational programming, the researchers report that it's not widely used. The authors urge educators not to wait for more advancement before using such technology with students in the classroom.

They co-authored a paper on their research titled "Violent Video Games as Exemplary Teachers: A Conceptual Analysis," which will be published in an upcoming issue of the Journal of Youth and Adolescence. It is already available online to the journal's subscribers.

http://www.iastate.edu/

Many Girls Play Violent Video Games


A new study by researchers at the Massachusetts General Hospital's (MGH) Center for Mental Health and Media dispels some myths and uncovers some surprises about young teens and violent video and computer games. The study, published in the July issue of Journal of Adolescent Health, is the first to ask middle-school youth in detail about the video and computer games they play and to analyze how many of those titles are rated M (Mature -- meant for ages 17 and up). It is also the first to ask children why they play video games.

Some of the more striking findings include:

Almost all young teens play video games. Just six percent of the sample had not played any electronic games in the previous six months.
Most 7th and 8th graders (ages 12 to 14) regularly play violent video games. Two-thirds of boys and more than one in four girls reported playing at least one M-rated game "a lot in the past six months."
A third of boys and one in ten girls play video or computer games almost every day.
Many children are playing video games to manage their feelings, including anger and stress. Children who play violent games are more likely to play to get their anger out. They are also more likely to play games with strangers on the Internet.
"Contrary to the stereotype of the solitary gamer with no social skills, we found that children who play M-rated games are actually more likely to play in groups -- in the same room, or over the Internet," says Cheryl K. Olson, ScD, co-director of the Center for Mental Health and Media and lead author of the study. "Boys' friendships in particular often center around video games."

At a time when the availability of M-rated games is on the rise, it is important to explore their effects on the children who play them, the researchers note. This study adds valuable insights into the everyday lives of young teens: who they're playing with, where, how much, and why. Olson's team found that Grand Theft Auto -- rated M for blood and gore, intense violence, strong language, strong sexual content, and use of drugs -- was the most popular game series among the boys surveyed. Surprisingly, it was also the second most popular series among the girls after The Sims, a game that simulates the activities of a virtual family; one in five girls aged 12 to 14 had played Grand Theft Auto "a lot in the past six months."

This study had a large sample consisting of 1,254 children from two states and an extremely high response rate, as virtually every eligible child who attended participating schools on the survey day took part. Children surveyed came from various socioeconomic, racial/ethnic and geographic groups, so these findings may represent the average middle-school child.

Many policy proposals at the state and national level focus on reducing children's access to M-rated games. Because so many participants played violent games, this study could give further ammunition to game critics. "But violent game play is so common, and youth crime has actually declined, so most kids who play these games occasionally are probably doing fine," Olson says. "We hope that this study is a first step toward reframing the debate from 'violent games are terrible and destroying society' to 'what types of game content might be harmful to what types of kids, in what situations?' We need to take a fresh look at what types of rules or policies make sense."

Finally, the new study suggests ways that parents can limit children's use of violent games, including keeping game consoles and computers out of their bedrooms. "And watch what older family members bring home," says Olson. "Kids who play with older siblings are twice as likely to play M-rated games."

http://www.mgh.harvard.edu/

Violent Video Games Affect Boys' Biological Systems


In the study, boys (aged 12 to 15) were asked to play two different video games at home in the evening. The boys’ heart rate was registered, among other parameters. It turned out that heart rate variability was affected to a higher degree when the boys were playing games focusing on violence compared with games without violent features. Differences in heart rate variability were registered both while the boys were playing the games and when they were sleeping that night. The boys themselves did not feel that they had slept poorly after having played violent games.

The results show that the autonomic nervous system, and thereby central physiological systems in the body, can be affected when you play violent games without your being aware of it. It is too early to draw conclusions about what the long-term significance of this sort of influence might be. What is important about this study is that the researchers have found a way, on the one hand, to study what happens physiologically when you play video or computer games and, on the other hand, to discern the effects of various types of games.

It is hoped that it will be possible to use the method to enhance our knowledge of what mechanisms could lie behind the association that has previously been suggested between violent games and aggressive behavior.

The researchers, from Stockholm University, Uppsala University and Karolinska Institutet in Sweden, also hope the method can be used to study how individuals are affected by playing often and for long periods, which can take the form of so-called game addiction.

An article on this research was recently published electronically in the scientific journal Acta Paediatrica.

This research on the effects of video games is funded by the Swedish Council for Working Life and Social Research (FAS) and the Oscar and Maria Ekman Philanthropic Fund.

http://www.vr.se/

Why Video Games Are Hard To Give Up


Psychologists at the University of Rochester, in collaboration with Immersyve, Inc., a virtual environment think tank, asked 1,000 gamers what motivates them to keep playing. The results published in the journal Motivation and Emotion this month suggest that people enjoy video games because they find them intrinsically satisfying.

"We think there's a deeper theory than the fun of playing," says Richard M. Ryan, a motivational psychologist at the University and lead investigator in the four new studies about gaming. Players reported feeling best when the games produced positive experiences and challenges that connected to what they know in the real world.

The research found that games can provide opportunities for achievement, freedom, and even a connection to other players. Those benefits trumped a shallow sense of fun, which doesn't keep players as interested.

"It's our contention that the psychological 'pull' of games is largely due to their capacity to engender feelings of autonomy, competence, and relatedness," says Ryan. The researchers believe that some video games not only motivate further play but "also can be experienced as enhancing psychological wellness, at least short-term," he says.

Ryan and coauthors Andrew Przybylski, a graduate student at the University of Rochester, and Scott Rigby, the president of Immersyve who earned a doctorate in psychology at Rochester, aimed to evaluate players' motivation in virtual environments. Study volunteers answered pre- and post-game questionnaires that were applied from a psychological measure based on Self-Determination Theory, a widely researched theory of motivation developed at the University of Rochester.

Rather than dissect the actual games, which other researchers have done, the Rochester team looked at the underlying motives and satisfactions that can spark players' interests and sustain them during play.

Revenues from video games—even before the latest Wii, PlayStation 3, and Xbox systems emerged—surpass the money made from Hollywood films annually. A range of demographic groups plays video games, and key to understanding their enjoyment is the motivational pull of the games.

Four groups of people were asked to play different games, including one group tackling "massively multiplayer online" games—MMO for short, which are considered the fastest growing segment of the computer gaming industry. MMOs are capable of supporting hundreds of thousands of players simultaneously. For those playing MMOs, the need for relatedness emerged "as an important satisfaction that promotes a sense of presence, game enjoyment, and an intention for future play," the researchers found.

Though different types of games and game environments were studied, Ryan points out that "not all video games are created equal" in their ability to satisfy basic psychological needs. "But those that do may be the best at keeping players coming back."

http://www.rochester.edu/

Nearly 1 In 10 Youth Gamers Addicted To Video Games


In a national Harris Poll survey of 1,178 American youths (ages 8-18), ISU Assistant Professor of Psychology Douglas Gentile found nearly one in 10 of the gamers (8.5 percent) to be pathological players according to standards established for pathological gambling -- causing family, social, school or psychological damage because of their video game playing habits.

"Although the general public uses the word 'addiction,' clinicians often report it as pathological use," said Gentile, who is also director of research for the Minneapolis-based National Institute on Media and the Family. "This is the first study to tell us the national prevalence of pathological play among youth gamers, and it is almost 1 in 10."

"What we mean by pathological use is that something someone is doing -- in this case, playing video games -- is damaging to their functioning," Gentile said. "It's not simply doing it a lot. It has to harm functioning in multiple ways."

Gentile analyzed data collected in a January 2007 Harris Poll survey. He compared respondents' video game play habits to the symptoms established in The Diagnostic and Statistical Manual of Mental Disorders for pathological gambling. Gamers were classified as "pathological" if they exhibited at least six of 11 symptoms.
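The classification rule described above is a simple symptom-count threshold, which can be sketched as follows (the symptom labels here are illustrative placeholders, not the actual DSM-derived survey items Gentile used):

```python
# Hypothetical sketch of the threshold classification described above.
# The symptom names are placeholders, not the actual survey items.

PATHOLOGICAL_THRESHOLD = 6  # at least 6 of the 11 symptoms

def is_pathological(symptoms_present):
    """Classify a gamer as 'pathological' if they exhibit at least
    six of the eleven gambling-derived symptoms."""
    count = sum(1 for present in symptoms_present.values() if present)
    return count >= PATHOLOGICAL_THRESHOLD

# Example respondent reporting 7 of 11 symptoms
respondent = {f"symptom_{i}": (i <= 7) for i in range(1, 12)}
print(is_pathological(respondent))  # True
```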

The pathological gamers in the study played video games 24 hours per week, about twice as much as non-pathological gamers. They also were more likely to have video game systems in their bedrooms, reported having more trouble paying attention in school, received poorer grades in school, had more health problems, were more likely to feel "addicted," and even stole to support their habit.

The study also found that pathological gamers were twice as likely to have been diagnosed with attention problems such as Attention Deficit Disorder or Attention Deficit Hyperactivity Disorder.

Gentile was surprised to find that so many youth exhibit pathological patterns of video game play.

"I started studying video game addition in 1999 largely because I didn't believe in it," said Gentile, who is co-author of the book "Violent Video Game Effects on Children and Adolescents: Theory, Research, and Public Policy" (2007, Oxford University Press). "I assumed that parents called it 'addiction' because they didn't understand why their children spent so much time playing. So I measured the way you measure pathological gambling and the way it harms functioning, and was surprised to find that a substantial number of gamers do rise to that level (of pathological addiction)."

But now that this study provides more scientific evidence that the condition exists, the ISU psychologist emphasizes the need for further research to determine how best to treat it.

"There is still much we do not know," Gentile said. "We don't know who's most at risk, or whether this is part of a pattern of disorders. That's important because many disorders are co-morbid with others. It may be a symptom of depression, for example. And so we would want to understand that pattern of co-morbidity because that would help us know how to treat it."

Gentile is continuing his own research, currently conducting both longitudinal and clinical studies to determine risk factors and symptoms found in pathological youth gamers.

Hospital Equipment Unaffected By Cell Phone Use


In a study published in the March issue of Mayo Clinic Proceedings, researchers say normal use of cell phones results in no noticeable interference with patient care equipment. Three hundred tests were performed over a five-month period in 2006 without a single problem occurring.

The study involved two cellular phones, using different technologies from different carriers, and 192 medical devices. Tests were performed at the Mayo Clinic campus in Rochester.

The study's authors say the findings should prompt hospitals to alter or abandon their bans on cell phone use. Mayo Clinic leaders are reviewing the facility's cell phone ban because of the study's findings, says David Hayes, M.D., of the Division of Cardiovascular Diseases and a study author.

Cell phone bans inconvenience patients and their families who must exit hospitals to place calls, the study's authors say.

The latest study revisits two earlier studies that were done 'in vitro' (i.e., the equipment wasn't connected to patients), which also found minimal interference from cell phones used in health care facilities. Dr. Hayes says the latest study bolsters the notion that cell phones are safe to use in hospitals.

Other Technology-Related Proceedings Articles Explore Concerns for Patients

Two other pieces in the March issue of Mayo Clinic Proceedings also address whether technological devices interfere with patient care equipment. Unlike the cellular phone study, the other reports detail technological devices that caused patient care equipment to malfunction.

A letter to the editor published in the journal details the first known case of a portable CD player causing an abnormal electrocardiographic (ECG) recording within a hospital setting. The recording returned to normal when the CD player, which the patient was holding close to the ECG lead, was turned off.

Technology also can threaten implantable rhythm devices such as pacemakers and defibrillators outside the hospital setting, according to a journal report. The report outlines two cases of retail stores' anti-theft devices causing people's heart devices to malfunction.

The anti-theft devices are commonly placed near store exits and entrances, triggering an alarm if customers leave with merchandise that was not purchased. In two instances in Tennessee, customers with a pacemaker and an implantable cardiac defibrillator experienced adverse reactions after nearing anti-theft devices.

The devices triggered the adverse reactions, sending both patients to emergency rooms for evaluation. The report's authors recommend that the anti-theft devices be placed in areas of stores where customers won't linger -- away from vending machines or displays of sale merchandise, for instance -- to help avoid future episodes.

Store employees also should be trained to move a customer who has collapsed near an anti-theft device when medically advisable, says J. Rod Gimbel, M.D., of East Tennessee Heart Consultants and an author of the report. If they aren't moved, such customers could experience recurring, life-threatening malfunctions of their implantable devices, as did one patient described in the report.

"Simply moving the person away from the anti-theft device may save their life," Dr. Gimbel says.

Though Gimbel's report outlines only two cases of anti-theft devices causing implantable heart devices to malfunction, he asserts that similar instances are likely underreported, qualifying the problem as a potentially widespread public safety issue.

"Many times with public safety issues we wait until something bad occurs before we act," Dr. Gimbel says. "Here's an opportunity where we can make our knowledge public and head off future problems."

In an accompanying editorial, John Abenstein, M.D., of Mayo Clinic's Department of Anesthesiology, addresses the journal reports relating to the impact of technological devices on patient care equipment.

Dr. Abenstein says the risk of some technological devices upsetting the function of patient care equipment in hospitals appears to be small. The Food and Drug Administration (FDA) should take a more explicit stand on the matter, he says, so that health care facility policies can be altered when appropriate.

Other authors of the cell phone study are Jeffrey Tri, Rodney Severson, and Linda Hyberger, all of Mayo Clinic Rochester. The other author of the anti-theft device report is James Cox Jr., M.D., of the University of Tennessee Medical Center-Knoxville.

http://www.mayoclinic.org/news/

A Blueprint For 'Smart' Health Care


What people have come to expect in cell phones and personal communicators may soon become common in health-care devices and products at home and in medical offices, thanks to new technology announced recently by the University of Florida and IBM.

The technology creates the first-ever roadmap for widespread commercial development of "smart" devices that, for example, take a person's blood pressure, temperature or respiration rate the minute a person steps into his or her house -- then transmit it immediately and automatically to doctors or family.

That could eliminate the need for many doctor's visits, which are often difficult for the elderly or sick. By enabling regular updates via text message or e-mail, the technology also could pave the way for people to share real-time information on their health or well-being with absent loved ones. And it could prove useful for doctors who need to keep tabs on many patients at one time by helping the doctors to prioritize whom to treat first.

"We call it quality-of-life engineering," said Sumi Helal, professor of computer engineering and the project's lead UF researcher. "It's really a change of mindset."

The idea of using technology to provide medical care at a distance is nothing new. Doctors have relied on "telemedicine" to communicate with specialists for years. More recently, telemedicine has been expanded to include, for example, surgeons performing robotic procedures on distant patients.

But the UF-IBM advance goes a step further: It provides the technological "stepstones" to make it easy for any company to manufacture and sell smart networked devices -- while also making them more user-friendly for consumers.

"UF and IBM both see the need and the opportunity to integrate the physical world of sensors and other devices directly into enterprise systems," said Richard Bakalar, Chief Medical Officer for IBM. "Doing so in an open environment will remove market inhibitors that impede innovation in critical industries like health care and open a broader device market that's fueled by uninterrupted networking."

Helal has devoted the past several years to developing smart devices for the elderly in a model home known as the "Gator Tech Smart Home" in Gainesville.

He and his students pioneered the "Smart Wave" microwave oven that can automatically determine how much time to cook a frozen meal or keep track of how much salt it contains. Among other devices, they also created an instrument that records how many steps a person takes, information that can tell absent caregivers how active a home's occupants are.

But these and other devices currently have a major shortcoming: They require "a team of engineers" to install them, Helal said. In a world where consumers are accustomed to electronics that require no more than a power outlet, that dramatically limits their appeal. "We decided to create a technology that self integrates," Helal said. "When you bring it in to the house and plug it in, it automatically provides its service and finds a path to the outside world."

With $60,000 in research funding from IBM, Helal designed "middleware," or software and hardware that glues together different systems, that can give his and any similar health-aid devices this independence and connectivity. Importantly, the software is based on open standards, or publicly available specifications useable by anyone, such as those now being made available by consortiums of technology companies including Eclipse, W3C and OSGi.

Open standards make it easy for product developers to tap the technology in any new smart assistive devices, Helal said. That, in turn, will make the devices more common.

The hardware component of the system is an inexpensive sensor platform about half the size of a business card. Developed at UF and licensed to Pervasa, a Gainesville-based UF spinoff company headed by Helal, the "Atlas" platform makes it easy to create a network of sensors and make their information available on a computer network.

The advance is crucial given the increasing number of elderly Americans. The number of people 85 and over is expected to rise from 4.2 million in 2000 to 6.1 million in 2010 and 9.6 million by 2030, according to federal government statistics. Meanwhile, the percentage of older Americans living alone will either remain high or continue to grow: About half of women and nearly a quarter of men aged 75 and older currently live alone.

But the UF-IBM technology may also prove useful in many other medical settings. For example, Helal said, it could help emergency rooms operate more safely. Rather than a standard waiting list, patients could be equipped with networked wireless monitors of their vital signs, allowing doctors to determine who in a waiting room needs the most immediate care.


http://www.ufl.edu/

Software Wrapper For Smarter, Networked Homes


Homes today are filled with increasing numbers of high-tech gadgets, from smart phones and PCs to state-of-the-art TV and audio systems, many of them with built-in networking capabilities. Combined, these devices could form the building blocks of the smart homes of the future, but only if they can be made to work together intelligently.


Although the idea of creating intelligent networked home environments as a way to make life easier, safer and more enjoyable has been around for some time, the technology has yet to catch up with the vision. Home automation systems have become more commonplace and consumer electronics have more networking capability, but no one has, so far, gotten all the high-tech and not so high-tech gadgetry cluttering modern homes to work together in an intelligent way. It is not yet common for your fridge to warn your TV that its door has been left open, or for your heating system to turn on when you return home, for example.

“People are finding themselves with all these networkable devices and are wondering where the applications are that can use these devices to make life easier and how they could be of more value together than individually,” says Maddy Janse, a researcher for Dutch consumer electronics group Philips.

There are two fundamental obstacles to realising the vision of the intelligent networked home: lack of interoperability between individual devices and the need for context-aware artificial intelligence to manage them. And, to make smart homes a reality, the two issues must be addressed together.

Software wrapper to get gadgets talking

The EU-funded Amigo project, coordinated by Janse, is doing just that, creating a middleware software platform that will get all networkable devices in the home talking to each other and providing an artificial intelligence layer to control them.

“With the Amigo system, you can take any networkable device, create a software wrapper for it and dynamically integrate it into the networked home environment,” Janse explains.
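The wrapper idea Janse describes can be sketched as an adapter that translates a device's proprietary interface into a common one the home network expects. All class and method names below are invented for illustration; they are not the actual Amigo APIs:

```python
# Illustrative sketch only; names are invented, not actual Amigo APIs.

class ProprietaryLamp:
    """A device with its own vendor-specific control interface."""
    def __init__(self):
        self.state = "OFF"
    def switch(self, state):
        self.state = state

class NetworkedDevice:
    """Common interface the home network expects of every device."""
    def turn_on(self): raise NotImplementedError
    def turn_off(self): raise NotImplementedError

class LampWrapper(NetworkedDevice):
    """Software wrapper translating the common interface into the
    lamp's proprietary one."""
    def __init__(self, lamp):
        self.lamp = lamp
    def turn_on(self):
        self.lamp.switch("ON")
    def turn_off(self):
        self.lamp.switch("OFF")

class HomeNetwork:
    """Registry into which wrapped devices integrate dynamically."""
    def __init__(self):
        self.devices = {}
    def register(self, name, device):
        self.devices[name] = device

# Wrap a proprietary device and add it to the networked environment
network = HomeNetwork()
network.register("living-room-lamp", LampWrapper(ProprietaryLamp()))
network.devices["living-room-lamp"].turn_on()
```

The point of the pattern is that the network only ever sees the common interface, so any device can be integrated by writing one small wrapper rather than changing the network itself.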

The project, which involves several big industrial and research partners, is unique in that it is addressing the issues of interoperability and intelligence together and, most significantly, its software is modular and open source.

By steering away from creating a monolithic system and making the software accessible to all, the partners believe they can overcome the complications that have held back other smart home projects. For consumer electronics companies and telecoms firms, the system has the additional benefit of providing a test bed for new products and services.

“What we are trying to do is so large and so complex that it has to be broken down into smaller parts. By making it open source and letting third-party developers create applications we can ensure the system addresses whatever challenges arise,” Janse says.

The Amigo architecture consists of a base middleware layer, an intelligent user services layer, and a programming and deployment framework that developers can use to create individual applications and services. These individual software modules form the building blocks of the networked home environment, which has the flexibility to grow as and when new devices and applications are added.

Interoperability is ensured through support for and abstraction of common interaction and home automation standards and protocols, such as UPnP and DLNA as well as web services, while the definition of appropriate ontologies enables common understanding at a semantic level.

“A lot of applications are already available today and more will be created as more developers start to use the software,” Janse says.

Vision of the future

A video created by the project partners underscores their vision for the future in which homes adapt to the behaviour of occupants, automatically setting ambient lighting for watching a movie, locking the doors when someone leaves or contacting relatives or emergency services if someone is ill or has an accident. In an extended home environment, the homes of friends and relatives are interconnected, allowing information and experiences to be shared more easily and setting the stage for the use of tele-presence applications to communicate and interact socially.

Initially, Janse sees such networked systems being employed in larger scale environments than an individual home or for specific purposes. Some subsets of applications could be rolled out in hotels or hospitals or used to monitor the wellbeing of the elderly or infirm, for example.

“With the exception of people with a lot of money building their homes from scratch, it will be a while before intelligent networked homes become commonplace,” the coordinator notes. “In addition, this isn’t something average consumers can easily set up themselves; currently, some degree of programming knowledge is needed, and installers need to become familiar with the concepts and their potential.”

Even so, the project is hoping to continue to stimulate the growth of the sector.

In October, it launched the Amigo Challenge, a competition in which third-party programmers have been invited to come up with new applications using the Amigo software. Janse expects the initiative will lead to the software being used in even more innovative and possibly unexpected ways.


http://cordis.europa.eu/ictresults/index.cfm?section=home&tpl=home

Simple And Secure Networked Home


Most people will only start to control equipment remotely in their homes when they believe it is simple and safe to do so. A newly developed control system provides personalised answers.

Software that enables people to control the audiovisual equipment and white goods in their home through one simple, remote interface has been demonstrated by researchers on the ESTIA project.

New networked devices are automatically recognised by the system, and the network can be administered using a wide range of devices readily found in the home, including TVs, cordless phones, handheld PDAs, or from a PC.

Increasingly, multimedia equipment and even ovens, washing machines and tumble driers in our homes can be controlled remotely. While we see the benefits, few of us are firing up the oven from work so dinner is cooked when we arrive home. Why?

There are two main reasons we are reluctant to tap into home networks, according to Professor Lars Dittmann, a lead researcher in the EU-funded ESTIA project which studied what is needed in an enhanced networked environment for personalised AV content and appliances.

Firstly, he says, people perceive the control of networked devices as too complicated – particularly as the thousands of ‘networkable’ devices available for the home tend to have their own proprietary control systems. There is also a trust issue. Parents, for instance, worry that if it is possible to turn the oven on over the internet, their children will learn how to do it with potentially catastrophic consequences.

ESTIA’s solution

The ESTIA team sought to address both these issues by producing a single, simple and easy-to-use interface for all networked devices, and by giving each network user a personal identity with different access rights.

“For example, it would allow people entering the house to type in a four digit pin code on a pad by the door,” says Dittmann. “If there was an adult in the house, the children would be able to use the oven or microwave, but they couldn’t if they were home alone. Similarly, it is a way to control or block content on the TV.”
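Dittmann's example amounts to a per-user access policy checked against who is currently home. A minimal sketch of that logic (the PINs, roles, and adult-presence rule here are assumptions for illustration, not ESTIA's actual implementation):

```python
# Illustrative sketch; not the actual ESTIA software.

USERS = {
    "1234": {"name": "Alice", "role": "adult"},
    "5678": {"name": "Ben", "role": "child"},
}

present = set()  # PINs of people currently in the house

def enter_house(pin):
    """Record a user as present after they type their PIN at the door."""
    if pin in USERS:
        present.add(pin)

def may_use_oven(pin):
    """Children may use the oven only when an adult is also home."""
    user = USERS.get(pin)
    if user is None:
        return False
    if user["role"] == "adult":
        return True
    return any(USERS[p]["role"] == "adult" for p in present)

enter_house("5678")          # child comes home alone
print(may_use_oven("5678"))  # False: no adult present
enter_house("1234")          # an adult arrives
print(may_use_oven("5678"))  # True
```

The same check generalises to other appliances or to blocking TV content: each networked device consults the access policy rather than carrying its own.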

As well as the residential gateway software, for control of the home network via an internet connection, the team also developed a Home Media Gateway – a Windows Vista-based set-top box that allows a higher level of administration and control.

The ESTIA home networking architecture selects and uses whatever networking technologies are available – from IP-based networks to KNX. KNX, or ‘European Installation Bus’ as it has been known, is a wire-based platform for building control systems. Based on this physical infrastructure, ESTIA defined a set of higher-layer interfaces for machine-to-machine and person-to-machine interoperability.

Road to new standards

“We don’t believe that we have set the ultimate standard here,” says Dittmann, “but we believe we have moved the debate ahead by demonstrating that network control systems don’t have to be too complicated. It is simple for anyone who can use TV text to set up a device and an administrator, after connecting a device, can decide that it should only be visible or controllable by certain people.”

Having all devices on a single network and sharing one interface adds considerable flexibility and enables home users to personalise the services they use. For example, when a meal is ready in the oven an alert could pop up on the television screen in the living room.

Some of the participants are incorporating elements from ESTIA into their next-generation products. Keletron is introducing ESTIA’s audiovisual handling core logic in its product portfolio and presenting this to potential customers considering gateway installed services.

All of the companies that participated in the ESTIA project, including Siemens and the Slovenian white goods group Gorenje, have gained a lot of experience in how to exploit the commercial potential of a personalised home-networking control system, according to Dittmann. Moving forward from that point will require consensus.

“We demonstrated that devices could be automatically recognised by the network. To move forward requires the manufacturers of home-network-enabled devices to agree on a number of standards,” he concludes.


http://cordis.europa.eu/ictresults/

Ensuring Universal Access In Digital Homes Makes For An Easier Life


The EUREKA ITEA software Cluster ANSO project makes possible the seamless integration of domestic networked multimedia, home control and communications devices, providing universal access to computing and entertainment services. As a result, intelligent sensors, actuators, wireless networks and terminal devices will blend into our daily living environments.

More citizens will gain access to digital services and have a much greater choice when mixing services and appliances to suit specific needs. This will dramatically accelerate development of new networked multimedia services and content as well as their use in building innovative applications to boost smart digital home services in Europe.

“The main problem now lies in the overabundant and overwhelming variety of incompatible standards and technologies in home automation, communications and multimedia systems,” explains project leader Tommi Aihkisalo of Finnish research institute VTT. “There are dozens of home control and automation networks and protocols available and even more in the field of multimedia and communications. All these technologies are competing with each other and are incompatible – lacking intercommunications abilities.”

The project outline was drafted by a Greek partner that was unable to participate due to a lack of funding; VTT then took over the lead. An enthusiastic and highly competent consortium of industrial and academic partners from three countries carried out the project. Their experience ranged from the development of networking equipment to service provisioning, software and protocol development, and network operations.

Homogeneous access

To ensure it met real-world needs, ANSO studied and evaluated market and end-user needs through a public survey and interviews with technical experts. It quickly realised that the variety of standards would have to continue to coexist. That is why the main contribution of the project has been a unified middleware solution for interoperability – middleware enables incompatible hardware and software systems to communicate and interact.
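One common way such middleware reconciles coexisting standards is the adapter pattern: each native protocol keeps its own API, and a thin adapter maps it onto one uniform interface that applications program against. The sketch below is an assumption-laden illustration – the device classes, group address and commands are invented, not ANSO's actual design:

```python
from abc import ABC, abstractmethod


class Device(ABC):
    """Uniform interface the middleware exposes for every device."""

    @abstractmethod
    def set_state(self, on: bool) -> str: ...


class KnxLamp:
    """Hypothetical native KNX-style API: writes a group-address telegram."""

    def write_telegram(self, group_address, value):
        return f"KNX {group_address} <- {value}"


class IpCamera:
    """Hypothetical native IP-style API: sends an HTTP-like command."""

    def send_command(self, command):
        return f"HTTP POST /camera {command}"


class KnxLampAdapter(Device):
    """Maps the uniform interface onto the KNX-style API."""

    def __init__(self, lamp, group_address):
        self.lamp, self.group_address = lamp, group_address

    def set_state(self, on):
        return self.lamp.write_telegram(self.group_address, 1 if on else 0)


class IpCameraAdapter(Device):
    """Maps the uniform interface onto the IP-style API."""

    def __init__(self, camera):
        self.camera = camera

    def set_state(self, on):
        return self.camera.send_command("start" if on else "stop")


# The application layer sees only the uniform Device interface.
devices = [KnxLampAdapter(KnxLamp(), "1/2/3"), IpCameraAdapter(IpCamera())]
results = [d.set_state(True) for d in devices]
```

Adding support for a new standard then means writing one new adapter, while every existing application keeps working unchanged – the essence of "homogeneous access".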

All the novel applications enabled by the developed platform and related technologies combine home automation, telecommunications and networked multimedia. The main benefit is the interoperability of these different technology domains, with the provision of homogeneous access. Major applications include: home gateways allowing home-automation applications such as security, remote control and management; assisted living for disabled or elderly people; home networking; communications applications; and multimedia applications and devices such as video-on-demand, set-top boxes and context-aware Internet applications.

Home robot

“Many of the applications developed are geared towards enabling an aging population to stay healthy and active for longer in their own homes,” adds Aihkisalo. A particularly interesting application investigated was an automatic home-assistance system involving a robotic companion for disabled or elderly occupants.

Synthetic Autonomous Majorduomo (SAM) is a companion robot designed for assistance and service functions. It is composed of a mobile platform on top of which is mounted a manipulator arm. A laser range-finder sensor provides autonomous navigation and safety functions. The arm holds a gripper for object manipulation; low-cost cameras mounted on the gripper give video feedback to the operator. These cameras are also used to enable a visual grasping function.

The robot companion is able to interact with the home environment using the middleware developed in ANSO; it controls and communicates with the environment to help it in its tasks – for example, turning up the home lights to improve illumination for its imaging systems. Using patented technologies, the user can easily designate what he or she wants the robot to fetch simply by clicking on the object in the image.
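The light-raising behaviour described above can be pictured as a small feedback loop: capture a frame, and if it is too dark, ask the middleware to raise the room lights and try again. This is a minimal sketch with invented names and a simulated camera, not ANSO's or SAM's actual control code:

```python
def grab_frame(light_level):
    """Stand-in for the robot's camera: frame brightness tracks illumination."""
    return {"brightness": light_level}


class RoomLights:
    """Hypothetical middleware handle to the room's lighting (0-100%)."""

    def __init__(self, level=20):
        self.level = level

    def set_level(self, level):
        self.level = level


def capture_with_good_lighting(lights, minimum=60):
    """Raise the room lights in steps until the captured frame is bright enough."""
    frame = grab_frame(lights.level)
    while frame["brightness"] < minimum and lights.level < 100:
        lights.set_level(min(lights.level + 20, 100))
        frame = grab_frame(lights.level)
    return frame


lights = RoomLights(level=20)
frame = capture_with_good_lighting(lights)
```

The point is that the robot does not need its own lighting hardware: because the lights sit behind the same middleware, it can treat them as just another service it calls on.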


http://www.alphagalileo.org/