Monthly Archives: April 2017

Artificial intelligence shows potential to fight blindness

In a study published online in Ophthalmology, the journal of the American Academy of Ophthalmology, the researchers describe how they used deep-learning methods to create an automated algorithm to detect diabetic retinopathy. Diabetic retinopathy (DR) is a condition that damages the blood vessels at the back of the eye, potentially causing blindness.

“What we showed is that an artificial intelligence-based grading algorithm can be used to identify, with high reliability, which patients should be referred to an ophthalmologist for further evaluation and treatment,” said Theodore Leng, M.D., lead author. “If properly implemented on a worldwide basis, this algorithm has the potential to reduce the workload on doctors and increase the efficiency of limited healthcare resources. We hope that this technology will have the greatest impact in parts of the world where ophthalmologists are in short supply.”

Another advantage is that the algorithm does not require any specialized, hard-to-obtain, or costly computer equipment to grade images. It can be run on a common personal computer or smartphone with an average processor.

Deep learning is on the rise in computer science and medicine because it can teach computers to do what our brains do naturally. Dr. Leng and his colleagues created an algorithm from more than 75,000 images drawn from a wide range of patients representing several ethnicities, and used those images to teach a computer to distinguish healthy patients from those with any stage of disease, from mild to severe.
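
To make the approach concrete, here is a minimal sketch of how such an image-grading network could be set up with Keras in Python. The architecture, input size, and five-grade labeling are illustrative assumptions; the study’s actual model is not reproduced here.

```python
# Minimal sketch of a deep-learning grader for fundus photographs.
# The architecture, image size, and the five severity grades are
# illustrative assumptions, not the study's actual model.
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # hypothetical grades: none, mild, moderate, severe, proliferative

def build_dr_grader(input_shape=(299, 299, 3)):
    """Map a fundus image to a probability over retinopathy grades."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_dr_grader()
# Training would use a large set of graded fundus images, e.g.:
# model.fit(train_images, train_grades, epochs=10, validation_split=0.1)
```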

Dr. Leng’s algorithm identified all disease stages, from mild to severe, with an accuracy rate of 94 percent. Patients flagged by the algorithm are the ones who should see an ophthalmologist for further examination. An ophthalmologist is a physician who specializes in the medical and surgical treatment of eye diseases and conditions.

Diabetes affects more than 415 million people worldwide, or 1 in every 11 adults. About 45 percent of diabetic patients are likely to have diabetic retinopathy at some point in their life; however, fewer than half of patients are aware of their condition. Early detection and treatment are integral to combating this worldwide epidemic of preventable vision loss.

Ophthalmologists typically diagnose the presence and severity of diabetic retinopathy by direct examination of the back of the eye and by evaluation of color photographs of the fundus, the interior lining of the eye. Given the large number of diabetes patients globally, this process is expensive and time-consuming. Also, previous studies have shown that detection is somewhat subjective, even among trained specialists. This is why an effective, automated algorithm could potentially reduce the rate of worldwide blindness.

Positioning quantum bits in diamond optical circuits

Defects in diamond’s crystal lattice are promising candidates for storing quantum information, but practical, diamond-based quantum computing devices will require the ability to position those defects at precise locations in complex diamond structures, where the defects can function as qubits, the basic units of information in quantum computing. In Nature Communications, a team of researchers from MIT, Harvard University, and Sandia National Laboratories reports a new technique for creating targeted defects, which is simpler and more precise than its predecessors.

In experiments, the defects produced by the technique were, on average, within 50 nanometers of their ideal locations.

“The dream scenario in quantum information processing is to make an optical circuit to shuttle photonic qubits and then position a quantum memory wherever you need it,” says Dirk Englund, an associate professor of electrical engineering and computer science who led the MIT team. “We’re almost there with this. These emitters are almost perfect.”

The new paper has 15 co-authors. Seven are from MIT, including Englund and first author Tim Schröder, who was a postdoc in Englund’s lab when the work was done and is now an assistant professor at the University of Copenhagen’s Niels Bohr Institute. Edward Bielejec led the Sandia team, and physics professor Mikhail Lukin led the Harvard team.

Appealing defects

Quantum computers, which are still largely hypothetical, exploit the phenomenon of quantum “superposition,” or the counterintuitive ability of small particles to inhabit contradictory physical states at the same time. An electron, for instance, can be said to be in more than one location simultaneously, or to have both of two opposed magnetic orientations.

Where a bit in a conventional computer can represent zero or one, a “qubit,” or quantum bit, can represent zero, one, or both at the same time. It’s the ability of strings of qubits to, in some sense, simultaneously explore multiple solutions to a problem that promises computational speedups.
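
In standard quantum-mechanics notation (textbook material, not anything specific to this work), a qubit state and its measurement probabilities are written as:

```latex
% A qubit is a superposition of the basis states |0> and |1>.
\[
  \lvert \psi \rangle \;=\; \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1,
\]
% Measuring the qubit yields 0 with probability |alpha|^2 and
% 1 with probability |beta|^2.
```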

Diamond-defect qubits result from the combination of “vacancies,” which are locations in the diamond’s crystal lattice where there should be a carbon atom but there isn’t one, and “dopants,” which are atoms of materials other than carbon that have found their way into the lattice. Together, the dopant and the vacancy create a dopant-vacancy “center,” which has free electrons associated with it. The electrons’ magnetic orientation, or “spin,” which can be in superposition, constitutes the qubit.

A perennial problem in the design of quantum computers is how to read information out of qubits. Diamond defects present a simple solution, because they are natural light emitters. In fact, the light particles emitted by diamond defects can preserve the superposition of the qubits, so they could move quantum information between quantum computing devices.

Silicon switch

The most-studied diamond defect is the nitrogen-vacancy center, which can maintain superposition longer than any other candidate qubit. But it emits light in a relatively broad spectrum of frequencies, which can lead to inaccuracies in the measurements on which quantum computing relies.

In their new paper, the MIT, Harvard, and Sandia researchers instead use silicon-vacancy centers, which emit light in a very narrow band of frequencies. They don’t naturally maintain superposition as well, but theory suggests that cooling them down to temperatures in the millikelvin range — fractions of a degree above absolute zero — could solve that problem. (Nitrogen-vacancy-center qubits require cooling to a relatively balmy 4 kelvins.)

To be readable, however, the signals from light-emitting qubits have to be amplified, and it has to be possible to direct them and recombine them to perform computations. That’s why the ability to precisely locate defects is important: It’s easier to etch optical circuits into a diamond and then insert the defects in the right places than to create defects at random and then try to construct optical circuits around them.

In the process described in the new paper, the MIT and Harvard researchers first planed a synthetic diamond down until it was only 200 nanometers thick. Then they etched optical cavities into the diamond’s surface. These increase the brightness of the light emitted by the defects (while shortening the emission times).

Then they sent the diamond to the Sandia team, which had customized a commercial device called the Nano-Implanter to eject streams of silicon ions. The Sandia researchers fired 20 to 30 silicon ions into each of the optical cavities in the diamond and sent it back to Cambridge.
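
As a rough illustration of the reported placement accuracy (on average within 50 nanometers of the target), here is a toy Monte Carlo that assumes an isotropic Gaussian placement error; the error model and the 100-nanometer tolerance are assumptions made for illustration, not details from the paper.

```python
# Toy Monte Carlo: if each implanted defect misses its target with an
# isotropic 2-D Gaussian error, what per-axis sigma gives a mean miss
# of ~50 nm, and how often does a defect land within 100 nm?
# The Gaussian error model is an assumption for illustration only.
import numpy as np

rng = np.random.default_rng(0)
mean_miss_nm = 50.0
# For a 2-D isotropic Gaussian, the mean radial miss is sigma*sqrt(pi/2).
sigma = mean_miss_nm / np.sqrt(np.pi / 2)

n = 100_000
dx = rng.normal(0.0, sigma, n)
dy = rng.normal(0.0, sigma, n)
miss = np.hypot(dx, dy)

print(f"per-axis sigma        : {sigma:.1f} nm")
print(f"simulated mean miss   : {miss.mean():.1f} nm")
print(f"fraction within 100 nm: {(miss < 100).mean():.3f}")
```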

Practicing brain surgery

A report on the simulator that guides trainees through an endoscopic third ventriculostomy (ETV) was published in the Journal of Neurosurgery: Pediatrics on April 25. The procedure uses endoscopes, which are small, computer-guided tubes and instruments, to treat certain forms of hydrocephalus, a condition marked by an excessive accumulation of cerebrospinal fluid and pressure on the brain. ETV is a minimally invasive procedure that short-circuits the fluid back into normal channels in the brain, eliminating the need to implant a shunt, a lifelong device that carries the complications associated with any foreign body.

“For surgeons, the ability to practice a procedure is essential for accurate and safe performance of the procedure. Surgical simulation is akin to a golfer taking a practice swing,” says Alan R. Cohen, M.D., professor of neurosurgery at the Johns Hopkins University School of Medicine and a senior author of the report. “With surgical simulation, we can practice the operation before performing it live.”

While cadavers are the traditional choice for such surgical training, Cohen says they are scarce, expensive, nonreusable, and most importantly, unable to precisely simulate the experience of operating on the problem at hand, which Cohen says requires a special type of hand-eye coordination he dubs “Nintendo Neurosurgery.”

In an effort to create a more reliable, realistic and cost-effective way for surgeons to practice ETV, the research team worked with 3D printing and special effects professionals to create a lifelike, anatomically correct, full-size head and brain with the touch and feel of human skull and brain tissue.

The fusion of 3D printing and special effects resulted in a full-scale reproduction of a 14-year-old child’s head, modeled after a real patient with hydrocephalus, one of the most common problems seen in the field of pediatric neurosurgery. Special features include an electronic pump to reproduce flowing cerebrospinal fluid and brain pulsations. One version of the simulator is so realistic that it has facial features, hair, eyelashes and eyebrows.

To test the model, Cohen and his team randomly assigned four neurosurgery fellows and 13 medical residents to perform ETV on either the ultra-realistic simulator or a lower-resolution simulator, which had no hair, lashes or brows.

After completing the simulation, fellows and residents each rated the simulator using a five-point scale. On average, both the surgical fellows and the residents rated the simulator more highly (4.88 out of 5) on its effectiveness for ETV training than on its aesthetic features (4.69). The procedures performed by the trainees were also recorded and later watched and graded by two fully trained neurosurgeons who were blinded to the trainees’ identities and stage of training.

The neurosurgeons assessed the trainees’ performance using criteria such as “flow of operation,” “instrument handling” and “time and motion.”

Neurosurgeons consistently rated the fellows higher than residents on all criteria measured, which accurately reflected their advanced training and knowledge, and demonstrated the simulator’s ability to distinguish between novice and expert surgeons.

New capacity for electronics

Touch displays controlled by lasers are still science fiction, but a new study may bring them closer to reality. A team of researchers from Japan reports in Applied Physics Letters, from AIP Publishing, that they have discovered a phenomenon called the photodielectric effect, which could lead to laser-controlled touch displays.

A number of basic circuit components have been developed beyond their traditional electricity-based designs to instead be controlled with light, such as photo-resistors, photodiodes, and phototransistors. However, there isn’t yet a photo-capacitor.

“A photo-capacitor provides a novel way for operating electronic devices with light,” said Hiroki Taniguchi of the University of Nagoya in Japan. “It will push the evolution of electronics to next-generation photo-electronics.”

Capacitors are basic components in all kinds of electronics, acting somewhat like buckets for electrons that can, for example, store energy or filter unwanted frequencies. Most simply, a capacitor consists of two parallel conducting plates separated by an electrically insulating material, called a dielectric, such as air or glass. Applying a voltage across the plates causes equal and opposite charges to build up on the two plates.
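
For a sense of scale, the textbook parallel-plate relation C = ε0εrA/d can be evaluated directly; the plate area, gap, voltage, and permittivity values in this sketch are invented for illustration and are not from the study.

```python
# Parallel-plate capacitor basics: capacitance scales linearly with the
# dielectric's relative permittivity, so raising the permittivity with
# light would directly raise the charge stored at a fixed voltage.
# All numbers below are illustrative.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(eps_r, area_m2, gap_m):
    """C = EPS0 * eps_r * A / d for an ideal parallel-plate capacitor."""
    return EPS0 * eps_r * area_m2 / gap_m

AREA, GAP, VOLTS = 1e-6, 1e-6, 5.0  # 1 mm^2 plates, 1 um gap, 5 V
for eps_r in (10.0, 15.0):          # e.g. permittivity in the dark vs. lit
    c = capacitance(eps_r, AREA, GAP)
    energy = 0.5 * c * VOLTS**2     # stored energy, E = C*V^2/2
    print(f"eps_r = {eps_r:4.1f}: C = {c*1e12:5.2f} pF, "
          f"stored energy = {energy*1e9:.2f} nJ")
```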

The dielectric’s properties play a determining role in the electric field profile between the plates and, in turn, in how much energy the capacitor can store. By using light to increase a property of the dielectric called permittivity, Taniguchi and his colleagues hope to create light-controlled capacitors.

Previous researchers have achieved a type of photodielectric effect in a variety of materials, but those efforts relied on photoconductance, where light increases the material’s electrical conductivity. The rise in conductance, it turns out, leads to greater dielectric permittivity.

But this type of extrinsic photodielectric effect isn’t suitable for practical applications, Taniguchi said. A capacitor must be a good insulator, preventing electrical current from flowing. But under the extrinsic photodielectric effect, a capacitor’s insulating properties deteriorate. In addition, such a capacitor would only work with low-frequency alternating current.

Now Taniguchi and his colleagues have found an intrinsic photodielectric effect in a ceramic with the composition LaAl0.99Zn0.01O3−δ. “We have demonstrated the existence of the photodielectric effect experimentally,” he said.

In their experiments, they shined a light-emitting diode (LED) onto the ceramic and measured its dielectric permittivity, which increased even at high frequencies. But unlike prior experiments that used the extrinsic photodielectric effect, the material remained a good insulator.
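
In such measurements, “good insulator” is usually quantified by the dielectric loss tangent, tan δ = ε″/ε′: a conductive (extrinsic) response shows large loss, while an intrinsic permittivity change does not. The sketch below converts hypothetical capacitance-meter readings into permittivity and loss; the sample geometry and readings are invented, not the study’s data.

```python
# Extracting real permittivity (eps') and loss (eps'') from a capacitance
# measurement. The geometry and the LCR-meter readings are hypothetical.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def permittivity_and_loss(c_farads, tan_delta, area_m2, thickness_m):
    """Parallel-plate sample: eps' = C*d/(EPS0*A), eps'' = eps'*tan(delta)."""
    eps_real = c_farads * thickness_m / (EPS0 * area_m2)
    return eps_real, eps_real * tan_delta

# Hypothetical readings with the LED off vs. on: permittivity rises while
# the loss tangent stays small (the sample remains a good insulator).
for label, c_pF, tan_d in (("LED off", 12.0, 0.002), ("LED on ", 13.1, 0.003)):
    er, ei = permittivity_and_loss(c_pF * 1e-12, tan_d,
                                   area_m2=2e-5, thickness_m=5e-4)
    print(f"{label}: eps' = {er:5.1f}, eps'' = {ei:.3f}, tan(delta) = {tan_d}")
```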

The lack of a significant loss means the LED is directly altering the dielectric permittivity of the material, and, in particular, is not increasing conductance, as is the case with the extrinsic effect. It’s still unclear how the intrinsic photodielectric effect works, Taniguchi said, but it may have to do with defects in the material.

Light excites electrons into higher (quantized) energy states, but the quantum states of defects are confined to small regions, which may prevent these photo-excited electrons from traveling far enough to generate an electric current. The hypothesis is that the electrons remain trapped, so the dielectric material stays electrically insulating.

Data analysis supports public-health decision making

“In a real-world outbreak, the time is often too short and the data too limited to build a really accurate model to map disease progression or guide public-health decisions,” said Ashlynn R. Daughton, a graduate research assistant at Los Alamos and doctoral student at University of Colorado, Boulder. She is lead author on a paper out last week in Scientific Reports, a Nature journal. “Our aim is to use existing models with low computational requirements first to explore disease-control measures and second to develop a platform for public-health collaborators to use and provide feedback on models,” she said.

The research draws on Los Alamos’ expertise in computational modeling and health sciences and contributes to the Laboratory’s national security mission by protecting against biological threats. Infectious diseases are a leading cause of death globally. Decisions surrounding how to control an infectious disease outbreak currently rely on a highly subjective process that involves both surveillance and expert opinion. Epidemiological modeling can fill gaps in the decision-making process, she says, by using available data to provide quantitative estimates of outbreak trajectories — determining where the infection is going, and how fast, so medical supplies and staff can be deployed for maximum effect. But if the tool requires unavailable data or overwhelms the capabilities of the health system, it won’t be operationally useful.

Collaboration between the modeling community and public-health policy community enables effective deployment, but the modeling resources need to be connected more strongly with the health community. Such collaboration is rare, as Daughton describes it, resulting in a scarcity of models that truly meet the needs of the public-health community.

Simple, traditional models group people into categories based on their disease status (for example, SIR for Susceptible, Infected or Recovered). “For this initial work, we use a SIR model, modified to include a control measure, to explore many possible disease progression paths. The SIR model was chosen because it is the simplest and requires minimal computational resources,” the paper notes.
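
A minimal version of such a model, with transmission cut by a fixed factor once a control measure begins, might look like the sketch below; all parameter values are illustrative, not the paper’s calibrated numbers.

```python
# Minimal SIR model with a crude control measure: the transmission rate
# beta is multiplied by control_factor once the intervention starts.
# All parameters are illustrative.
import numpy as np

def sir_with_control(beta=0.3, gamma=0.1, population=100_000, i0=10,
                     days=160, t_control=30, control_factor=0.5):
    """Integrate S-I-R compartments with a daily Euler step."""
    s, i, r = population - i0, i0, 0
    history = []
    for t in range(days):
        b = beta * (control_factor if t >= t_control else 1.0)
        new_infections = b * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((t, s, i, r))
    return np.array(history)

run = sir_with_control()
peak_day, peak_infectious = max(((t, i) for t, _, i, _ in run),
                                key=lambda pair: pair[1])
print(f"epidemic peaks on day {peak_day:.0f} "
      f"with {peak_infectious:.0f} people infectious")
```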

Other models are called agent-based, meaning they identify agents, often akin to an “individual,” and map each agent’s potential interactions during a day (for example, an individual might go to school, go to work, and interact with other members of the household). The model then extrapolates how each interaction could spread the disease. Because these are high-resolution models requiring significant expertise and computing power, as well as large quantities of data, they require resources beyond the reach of an average health department.
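
For contrast with the compartmental sketch above, here is a deliberately stripped-down, toy agent-based step. Real agent-based models track structured contacts such as households, schools, and workplaces; this version uses random mixing, and every parameter is invented.

```python
# Toy agent-based simulation: each infectious agent meets a few random
# others per day and transmits with a fixed probability. All parameters
# are invented for illustration.
import random

random.seed(1)
N, P_TRANSMIT, CONTACTS, RECOVERY_DAYS, DAYS = 1000, 0.05, 8, 7, 60

days_infected = {agent: None for agent in range(N)}  # None = not infectious
recovered = set()
for seed in random.sample(range(N), 5):
    days_infected[seed] = 0

for day in range(DAYS):
    newly_infected = []
    for agent, d in days_infected.items():
        if d is None:
            continue
        for _ in range(CONTACTS):
            other = random.randrange(N)
            if (days_infected[other] is None and other not in recovered
                    and random.random() < P_TRANSMIT):
                newly_infected.append(other)
    for agent in newly_infected:
        days_infected[agent] = -1  # becomes day 0 in the update below
    for agent, d in days_infected.items():
        if d is None:
            continue
        if d + 1 >= RECOVERY_DAYS:
            days_infected[agent] = None
            recovered.add(agent)
        else:
            days_infected[agent] = d + 1

active = sum(d is not None for d in days_infected.values())
print(f"day {DAYS}: {active} still infectious, {len(recovered)} recovered")
```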

For this study, using the simpler SIR model, the team explored outbreaks of measles, norovirus and influenza to show the feasibility of its use and describe a research agenda to further promote interactions between decision makers and the modeling community.

“Unlike standard epidemiological models that are disease and location specific and not transferrable or generalizable, this model is disease and location agnostic and can be used at a much higher level for planning purposes regardless of the specific control measure,” said Alina Deshpande, group leader of the Biosecurity and Public Health group at Los Alamos and principal investigator on the project.

Overall, the team determined, there is a clear need in the field to better understand outbreak parameters, underlying model assumptions, and the ways that these apply to real-world scenarios.

Helping extend Moore’s Law

In the world of semiconductor physics, the goal is to devise ever smaller and more efficient ways to control and keep track of 0 and 1, the binary code on which all information storage and logic functions in computers are based.

A new field of physics seeking such advancements is called valleytronics, which exploits the electron’s “valley degree of freedom” for data storage and logic applications. Simply put, valleys are maxima and minima of electron energies in a crystalline solid. A method to control electrons in different valleys could yield new, super-efficient computer chips.

A University at Buffalo team, led by Hao Zeng, PhD, professor in the Department of Physics, worked with scientists around the world to discover a new way to split the energy levels between the valleys in a two-dimensional semiconductor.

The work is described in a study published online today (May 1, 2017) in the journal Nature Nanotechnology.

The key to Zeng’s discovery is the use of a ferromagnetic compound to pull the valleys apart and keep them at different energy levels. This increases the energy separation between the valleys by a factor of 10 compared with what can be achieved by applying an external magnetic field.

“Normally there are two valleys in these atomically thin semiconductors with exactly the same energy. These are called ‘degenerate energy levels’ in quantum mechanics terms. This limits our ability to control individual valleys. An external magnetic field can be used to break this degeneracy. However, the splitting is so small that you would have to go to the National High Magnetic Field Laboratories to measure a sizable energy difference. Our new approach makes the valleys more accessible and easier to control, and this could allow valleys to be useful for future information storage and processing,” Zeng said.
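
A back-of-envelope calculation shows why external fields fall short. The valley splitting from an applied field B is roughly g·μB·B; the valley g-factor of about 4 used below is a commonly reported value for monolayer WSe2 and is an assumption here, not a number taken from this article.

```python
# Rough valley Zeeman splitting, Delta_E = g * mu_B * B, compared with
# thermal energy. The g-factor of ~4 is an assumed, commonly reported
# value for monolayer WSe2, not a figure from this article.
MU_B = 5.788e-5   # Bohr magneton, eV/T
K_B = 8.617e-5    # Boltzmann constant, eV/K
G_VALLEY = 4.0    # assumed valley g-factor

for b_tesla in (1, 10, 30):  # 30 T is high-magnetic-field-lab territory
    splitting_meV = G_VALLEY * MU_B * b_tesla * 1e3
    print(f"B = {b_tesla:2d} T -> valley splitting ~ {splitting_meV:5.2f} meV")

print(f"thermal energy kT at 300 K ~ {K_B * 300 * 1e3:.1f} meV")
```

Even at the very high fields available only in dedicated facilities, the splitting stays well below thermal energy at room temperature, which is the limitation the ferromagnetic approach is meant to overcome.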

The simplest way to understand how valleys could be used in processing data may be to think of two valleys side by side. When one valley is occupied by electrons, the switch is “on.” When the other valley is occupied, the switch is “off.” Zeng’s work shows that the valleys can be positioned in such a way that a device can be turned “on” and “off” with a tiny amount of electricity.

Microscopic ingredients

Zeng and his colleagues created a two-layered heterostructure, with a 10-nanometer-thick film of magnetic EuS (europium sulfide) on the bottom and a single layer (less than 1 nanometer thick) of the transition metal dichalcogenide WSe2 (tungsten diselenide) on top. The magnetic field of the EuS layer forced apart the energies of the two valleys in the WSe2.

Previous attempts to separate the valleys involved the application of very large magnetic fields from outside. Zeng’s experiment is believed to be the first time a ferromagnetic material has been used in conjunction with an atomically thin semiconductor material to split its valley energy levels.