
Exercise programmes for patients with heart disease designed by new digital tool

“Exercise reduces cardiovascular risk, improves body composition and physical fitness, and lowers mortality and morbidity,” said lead author Professor Dominique Hansen, associate professor in exercise physiology and rehabilitation of internal diseases at Hasselt University, Diepenbeek, Belgium. “But surveys have shown that many clinicians experience great difficulties in prescribing specific exercise programmes for patients with multiple cardiovascular diseases and risk factors.”

The European Association of Preventive Cardiology Exercise Prescription in Everyday Practice and Rehabilitative Training (EXPERT) tool generates exercise prescriptions for patients with different combinations of cardiovascular risk factors or cardiovascular diseases. The tool was designed by cardiovascular rehabilitation specialists from 11 European countries, in close collaboration with computer scientists from Hasselt University.

EXPERT can be installed on a laptop or personal computer (PC). During a consultation, the clinician inputs the patient’s characteristics and cardiovascular risk factors, cardiovascular diseases and other chronic conditions, medications, adverse events during exercise testing, and physical fitness (from a cardiopulmonary exercise test).

The tool automatically designs a personalized exercise programme for the patient. It includes the ideal exercise type, intensity, frequency, and duration of each session. Safety precautions are also given for patients with certain conditions. The advice can be printed out and given to the patient to carry out at home, and reviewed by the clinician in a few months.

Professor Hansen said: “EXPERT generates an exercise prescription and safety precautions, since certain patients are not allowed to do certain exercises. For example, a diabetic patient with retinopathy should not do high-intensity exercise.”

“This tool is the first of its kind,” said Professor Hansen. “It integrates all the international recommendations on exercise to calculate the optimum training programme for an individual patient. It really is personalized medicine.”

There are different exercise goals for each cardiovascular risk factor and cardiovascular disease. In a patient who has diabetes, is overweight, and has hypertension, the three goals are to reduce blood glucose, fat mass, and blood pressure. The tool takes all three goals into account.
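To make that goal-stacking concrete, here is a toy Python sketch; the mapping, the function name, and the risk-factor labels are invented for illustration and are not EXPERT’s actual rules or data.

```python
# Illustrative only: a toy mapping from risk factors to exercise goals,
# in the spirit of EXPERT. The real tool's rule base is far richer.
GOALS = {
    "diabetes":     "reduce blood glucose",
    "overweight":   "reduce fat mass",
    "hypertension": "reduce blood pressure",
}

def exercise_goals(risk_factors):
    """Collect every goal implied by a patient's risk-factor list."""
    return [GOALS[f] for f in risk_factors if f in GOALS]

print(exercise_goals(["diabetes", "overweight", "hypertension"]))
# ['reduce blood glucose', 'reduce fat mass', 'reduce blood pressure']
```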

Professor Hansen said: “EXPERT provides the exercise prescription a patient needs to meet their particular exercise goals, which should ultimately help them to feel better and reduce their risks of morbidity and mortality. By prescribing an exercise programme that really works, patients are more likely to be motivated to continue because they see that it is improving their health.”

The next step is to test the impact of EXPERT on patient outcomes in a clinical trial. Professor Hansen said: “Our hypothesis is that clinicians using the tool will prescribe exercise interventions with much greater clinical benefits. That will lead to greater reductions in body weight, blood pressure, blood glucose, and lipids, and improvements in physical fitness, with a positive impact on morbidity and mortality.”

A study comparing pull request acceptance rates of women and men on GitHub

“There are a number of questions and concerns related to gender bias in computer programming, but this project was focused on one specific research question: To what extent does gender bias exist when pull requests are judged on GitHub?” says Emerson Murphy-Hill, corresponding author of a paper on the study and an associate professor of computer science at North Carolina State University.

GitHub is an online programming community that fosters collaboration on open-source software projects. When people identify ways to improve code on a given project, they submit a “pull request.” Those pull requests are then approved or denied by “insiders,” the programmers who are responsible for overseeing the project.

For this study, researchers looked at more than 3 million pull requests from approximately 330,000 GitHub users, of whom about 21,000 were women.

The researchers found that 78.7 percent of women’s pull requests were accepted, compared to 74.6 percent for men.

However, when looking at pull requests by people who were not insiders on the relevant project, the results got more complicated.

Programmers who could easily be identified as women based on their names or profile pictures had lower pull request acceptance rates (58 percent) than users who could be identified as men (61 percent). But women programmers who had gender-neutral profiles had higher acceptance rates (70 percent) than any other group, including men with gender-neutral profiles (65 percent).
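The comparison boils down to acceptance rates within each (gender, identifiability) cell. A minimal sketch of that tabulation in Python follows; the column names and the tiny dataset are placeholders, not the study’s data.

```python
# Toy re-computation of acceptance rates by gender and profile identifiability.
import pandas as pd

pulls = pd.DataFrame({
    "gender":       ["woman", "woman", "man", "man", "woman", "man"],
    "identifiable": [True, False, True, False, False, True],
    "accepted":     [0, 1, 1, 1, 1, 0],
})

# Mean of the 0/1 'accepted' flag per cell = acceptance rate.
rates = pulls.groupby(["gender", "identifiable"])["accepted"].mean()
print(rates)
```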

“Our results indicate that gender bias does exist in open-source programming,” Murphy-Hill says. “The study also tells us that, in general, women on GitHub are strong programmers. We don’t think that’s because gender affects one’s programming skills, but likely stems from strong self-selection among women who submit pull requests on the site.

“We also want to note that this paper builds on a previous, un-peer-reviewed version of the paper, which garnered a lot of input that improved the research,” Murphy-Hill says.

The paper, “Gender Differences and Bias in Open Source: Pull Request Acceptance of Women Versus Men,” is published in the open-access journal PeerJ Computer Science. The paper was co-authored by Josh Terrell, a former undergraduate at Cal Poly; Andrew Kofink, a former undergraduate at NC State; Justin Middleton, a Ph.D. student at NC State; Clarissa Rainear, an undergraduate at NC State; Chris Parnin, an assistant professor of computer science at NC State; and Jon Stallings, an assistant professor of statistics at NC State. The work was done with support from the National Science Foundation under grant number 1252995.

Artificial intelligence shows potential to fight blindness

In a study published online in Ophthalmology, the journal of the American Academy of Ophthalmology, the researchers describe how they used deep-learning methods to create an automated algorithm to detect diabetic retinopathy. Diabetic retinopathy (DR) is a condition that damages the blood vessels at the back of the eye, potentially causing blindness.

“What we showed is that an artificial intelligence-based grading algorithm can be used to identify, with high reliability, which patients should be referred to an ophthalmologist for further evaluation and treatment,” said Theodore Leng, M.D., lead author. “If properly implemented on a worldwide basis, this algorithm has the potential to reduce the workload on doctors and increase the efficiency of limited healthcare resources. We hope that this technology will have the greatest impact in parts of the world where ophthalmologists are in short supply.”

Another advantage is that the algorithm does not require any specialized, inaccessible, or costly computer equipment to grade images. It can be run on a common personal computer or smartphone with average processors.

Deep learning is on the rise in computer science and medicine because it can teach computers to do what our brains do naturally. Dr. Leng and his colleagues created an algorithm based on more than 75,000 images from a wide range of patients representing several ethnicities, and then used it to teach a computer to distinguish between healthy patients and those with any stage of the disease, from mild to severe.
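As a rough illustration of the general approach, and not the authors’ published model, here is a minimal transfer-learning sketch in Python/PyTorch for a two-class fundus-image classifier; the backbone, hyperparameters, and preprocessing are assumptions.

```python
# Minimal sketch of a two-class (healthy vs. any-stage retinopathy) image
# classifier via transfer learning. Not the published algorithm.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Pretrained backbone with the final layer swapped for two classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

# Standard ImageNet-style preprocessing applied to fundus photographs.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One gradient step on a batch of labeled fundus images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```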

Dr. Leng’s algorithm could identify all disease stages, from mild to severe, with an accuracy rate of 94 percent. The patients it flags are the ones who should see an ophthalmologist for further examination. An ophthalmologist is a physician who specializes in the medical and surgical treatment of eye diseases and conditions.

Diabetes affects more than 415 million people worldwide, or 1 in every 11 adults. About 45 percent of diabetic patients are likely to have diabetic retinopathy at some point in their life; however, fewer than half of patients are aware of their condition. Early detection and treatment are integral to combating this worldwide epidemic of preventable vision loss.

Ophthalmologists typically diagnose the presence and severity of diabetic retinopathy by direct examination of the back of the eye and by evaluation of color photographs of the fundus, the interior lining of the eye. Given the large number of diabetes patients globally, this process is expensive and time-consuming. Also, previous studies have shown that detection is somewhat subjective, even among trained specialists. This is why an effective, automated algorithm could potentially reduce the rate of worldwide blindness.

Positioning quantum bits in diamond optical circuits

Practical, diamond-based quantum computing devices will require the ability to position light-emitting defects at precise locations in complex diamond structures, where the defects can function as qubits, the basic units of information in quantum computing. In Nature Communications, a team of researchers from MIT, Harvard University, and Sandia National Laboratories reports a new technique for creating targeted defects, which is simpler and more precise than its predecessors.

In experiments, the defects produced by the technique were, on average, within 50 nanometers of their ideal locations.

“The dream scenario in quantum information processing is to make an optical circuit to shuttle photonic qubits and then position a quantum memory wherever you need it,” says Dirk Englund, an associate professor of electrical engineering and computer science who led the MIT team. “We’re almost there with this. These emitters are almost perfect.”

The new paper has 15 co-authors. Seven are from MIT, including Englund and first author Tim Schröder, who was a postdoc in Englund’s lab when the work was done and is now an assistant professor at the University of Copenhagen’s Niels Bohr Institute. Edward Bielejec led the Sandia team, and physics professor Mikhail Lukin led the Harvard team.

Appealing defects

Quantum computers, which are still largely hypothetical, exploit the phenomenon of quantum “superposition,” or the counterintuitive ability of small particles to inhabit contradictory physical states at the same time. An electron, for instance, can be said to be in more than one location simultaneously, or to have both of two opposed magnetic orientations.

Where a bit in a conventional computer can represent zero or one, a “qubit,” or quantum bit, can represent zero, one, or both at the same time. It’s the ability of strings of qubits to, in some sense, simultaneously explore multiple solutions to a problem that promises computational speedups.
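In standard textbook notation (not taken from the article itself), a qubit’s state is a weighted superposition of the two basis states, with the weights constrained to form valid probability amplitudes:

```latex
\[
  \lvert\psi\rangle \;=\; \alpha\,\lvert 0\rangle + \beta\,\lvert 1\rangle,
  \qquad \lvert\alpha\rvert^{2} + \lvert\beta\rvert^{2} = 1
\]
```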

Diamond-defect qubits result from the combination of “vacancies,” which are locations in the diamond’s crystal lattice where there should be a carbon atom but there isn’t one, and “dopants,” which are atoms of materials other than carbon that have found their way into the lattice. Together, the dopant and the vacancy create a dopant-vacancy “center,” which has free electrons associated with it. The electrons’ magnetic orientation, or “spin,” which can be in superposition, constitutes the qubit.

A perennial problem in the design of quantum computers is how to read information out of qubits. Diamond defects present a simple solution, because they are natural light emitters. In fact, the light particles emitted by diamond defects can preserve the superposition of the qubits, so they could move quantum information between quantum computing devices.

Silicon switch

The most-studied diamond defect is the nitrogen-vacancy center, which can maintain superposition longer than any other candidate qubit. But it emits light in a relatively broad spectrum of frequencies, which can lead to inaccuracies in the measurements on which quantum computing relies.

In their new paper, the MIT, Harvard, and Sandia researchers instead use silicon-vacancy centers, which emit light in a very narrow band of frequencies. They don’t naturally maintain superposition as well, but theory suggests that cooling them down to temperatures in the millikelvin range — fractions of a degree above absolute zero — could solve that problem. (Nitrogen-vacancy-center qubits require cooling to a relatively balmy 4 kelvins.)

To be readable, however, the signals from light-emitting qubits have to be amplified, and it has to be possible to direct them and recombine them to perform computations. That’s why the ability to precisely locate defects is important: It’s easier to etch optical circuits into a diamond and then insert the defects in the right places than to create defects at random and then try to construct optical circuits around them.

In the process described in the new paper, the MIT and Harvard researchers first planed a synthetic diamond down until it was only 200 nanometers thick. Then they etched optical cavities into the diamond’s surface. These increase the brightness of the light emitted by the defects (while shortening the emission times).

Then they sent the diamond to the Sandia team, who have customized a commercial device called the Nano-Implanter to eject streams of silicon ions. The Sandia researchers fired 20 to 30 silicon ions into each of the optical cavities in the diamond and sent it back to Cambridge.

Practicing brain surgery

A report on the simulator that guides trainees through an endoscopic third ventriculostomy (ETV) was published in the Journal of Neurosurgery: Pediatrics on April 25. The procedure uses endoscopes, which are small, computer-guided tubes and instruments, to treat certain forms of hydrocephalus, a condition marked by an excessive accumulation of cerebrospinal fluid and pressure on the brain. ETV is a minimally invasive procedure that short-circuits the fluid back into normal channels in the brain, eliminating the need for implantation of a shunt, a lifelong device with the associated complications of a foreign body.

“For surgeons, the ability to practice a procedure is essential for accurate and safe performance of the procedure. Surgical simulation is akin to a golfer taking a practice swing,” says Alan R. Cohen, M.D., professor of neurosurgery at the Johns Hopkins University School of Medicine and a senior author of the report. “With surgical simulation, we can practice the operation before performing it live.”

While cadavers are the traditional choice for such surgical training, Cohen says they are scarce, expensive, nonreusable, and most importantly, unable to precisely simulate the experience of operating on the problem at hand, which Cohen says requires a special type of hand-eye coordination he dubs “Nintendo Neurosurgery.”

In an effort to create a more reliable, realistic and cost-effective way for surgeons to practice ETV, the research team worked with 3D printing and special effects professionals to create a lifelike, anatomically correct, full-size head and brain with the touch and feel of human skull and brain tissue.

The fusion of 3D printing and special effects resulted in a full-scale reproduction of a 14-year-old child’s head, modeled after a real patient with hydrocephalus, one of the most common problems seen in the field of pediatric neurosurgery. Special features include an electronic pump to reproduce flowing cerebrospinal fluid and brain pulsations. One version of the simulator is so realistic that it has facial features, hair, eyelashes and eyebrows.

To test the model, Cohen and his team randomly assigned four neurosurgery fellows and 13 medical residents to perform ETV on either the ultra-realistic simulator or a lower-resolution simulator, which had no hair, lashes or brows.

After completing the simulation, fellows and residents each rated the simulator using a five-point scale. On average, both the surgical fellows and the residents rated the simulator more highly (4.88 out of 5) on its effectiveness for ETV training than on its aesthetic features (4.69). The procedures performed by the trainees were also recorded and later watched and graded by two fully trained neurosurgeons who were blinded to the trainees’ identities and stage of training.

The neurosurgeons assessed the trainees’ performance using criteria such as “flow of operation,” “instrument handling” and “time and motion.”

Neurosurgeons consistently rated the fellows higher than residents on all criteria measured, which accurately reflected their advanced training and knowledge, and demonstrated the simulator’s ability to distinguish between novice and expert surgeons.

New capacity for electronics

Laser-controlled touch displays are still science fiction, but a new study may bring them closer to reality. A team of researchers from Japan reports in Applied Physics Letters, from AIP Publishing, that they have discovered a phenomenon called the photodielectric effect, which could make such displays possible.

A number of basic circuit components have been developed beyond their traditional electricity-based designs to instead be controlled with light, such as photo-resistors, photodiodes, and phototransistors. However, there isn’t yet a photo-capacitor.

“A photo-capacitor provides a novel way for operating electronic devices with light,” said Hiroki Taniguchi of Nagoya University in Japan. “It will push the evolution of electronics to next-generation photo-electronics.”

Capacitors are basic components for all kinds of electronics, acting somewhat like buckets for electrons that can, for example, store energy or filter unwanted frequencies. Most simply, a capacitor consists of two parallel conducting plates separated by an electrically insulating material, called a dielectric, such as air or glass. Applying a voltage across the plates causes opposing (and equal) charges to build up on both plates.

The dielectric’s properties play a determining role in the electric field profile between the plates and, in turn, how much energy the capacitor can store. By using light to increase a property of the dielectric called permittivity, Taniguchi and his colleagues hope to create light-controlled capacitors.
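The textbook parallel-plate relation (standard physics, not stated in the paper itself) makes the lever arm clear: capacitance scales linearly with the dielectric’s relative permittivity, so raising permittivity with light raises the stored charge per volt in proportion.

```latex
\[
  C \;=\; \frac{\varepsilon_r\,\varepsilon_0\,A}{d}
\]
% C: capacitance, \varepsilon_r: relative permittivity of the dielectric,
% \varepsilon_0: vacuum permittivity, A: plate area, d: plate separation.
```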

Previous researchers have achieved a type of photodielectric effect in a variety of materials, but those approaches relied on photoconductance, in which light increases the material’s electrical conductivity. The rise in conductance, it turns out, leads to greater dielectric permittivity.

But this type of extrinsic photodielectric effect isn’t suitable for practical applications, Taniguchi said. A capacitor must be a good insulator, preventing electrical current from flowing. But under the extrinsic photodielectric effect, a capacitor’s insulating properties deteriorate. In addition, such a capacitor would only work with low-frequency alternating current.

Now Taniguchi and his colleagues have found an intrinsic photodielectric effect in a ceramic with the composition LaAl0.99Zn0.01O3−δ. “We have demonstrated the existence of the photodielectric effect experimentally,” he said.

In their experiments, they shined a light-emitting diode (LED) onto the ceramic and measured its dielectric permittivity, which increased even at high frequencies. But unlike prior experiments that used the extrinsic photodielectric effect, the material remained a good insulator.

The lack of a significant loss means the LED is directly altering the dielectric permittivity of the material, and, in particular, is not increasing conductance, as is the case with the extrinsic effect. It’s still unclear how the intrinsic photodielectric effect works, Taniguchi said, but it may have to do with defects in the material.

Light excites electrons into higher (quantized) energy states, but the quantum states of defects are confined to small regions, which may prevent these photo-excited electrons from traveling far enough to generate an electric current. The hypothesis is that the electrons remain trapped, keeping the dielectric material electrically insulating.

Data analysis supports public health decision making

“In a real-world outbreak, the time is often too short and the data too limited to build a really accurate model to map disease progression or guide public-health decisions,” said Ashlynn R. Daughton, a graduate research assistant at Los Alamos and doctoral student at University of Colorado, Boulder. She is lead author on a paper out last week in Scientific Reports, a Nature journal. “Our aim is to use existing models with low computational requirements first to explore disease-control measures and second to develop a platform for public-health collaborators to use and provide feedback on models,” she said.

The research draws on Los Alamos’ expertise in computational modeling and health sciences and contributes to the Laboratory’s national security mission by protecting against biological threats. Infectious diseases are a leading cause of death globally. Decisions surrounding how to control an infectious disease outbreak currently rely on a highly subjective process that involves both surveillance and expert opinion. Epidemiological modeling can fill gaps in the decision-making process, she says, by using available data to provide quantitative estimates of outbreak trajectories — determining where the infection is going, and how fast, so medical supplies and staff can be deployed for maximum effect. But if the tool requires unavailable data or overwhelms the capabilities of the health system, it won’t be operationally useful.

Collaboration between the modeling community and the public-health policy community enables effective deployment, but the two remain weakly connected. Such collaboration is rare, as Daughton describes it, resulting in a scarcity of models that truly meet the needs of the public-health community.

Simple, traditional models group people into categories based on their disease status (for example, SIR for Susceptible, Infected or Recovered). “For this initial work, we use a SIR model, modified to include a control measure, to explore many possible disease progression paths. The SIR model was chosen because it is the simplest and requires minimal computational resources,” the paper notes.
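A minimal sketch of such a model in Python, with a single control measure folded in as a transmission-reduction factor; all parameter values here are illustrative placeholders, not the paper’s.

```python
# SIR model with a simple control measure that scales down transmission.
# Rates and the control strength are illustrative, not from the study.
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma, control):
    s, i, r = y
    eff_beta = beta * (1.0 - control)      # control measure reduces transmission
    return [-eff_beta * s * i,             # susceptibles become infected
            eff_beta * s * i - gamma * i,  # infected recover at rate gamma
            gamma * i]                     # recovered accumulate

beta, gamma, control = 0.3, 0.1, 0.4       # placeholder rates
sol = solve_ivp(sir, (0, 160), [0.99, 0.01, 0.0],
                args=(beta, gamma, control), dense_output=True)

t = np.linspace(0, 160, 161)
s, i, r = sol.sol(t)
print(f"peak infected fraction: {i.max():.3f}")
```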

Other models are called agent-based, meaning they identify agents, often akin to an “individual,” and map each agent’s potential interactions during a day (for example, an individual might go to school, go to work, and interact with other members of the household). The model then extrapolates how each interaction could spread the disease. Because these are high-resolution models requiring significant expertise and computing power, as well as large quantities of data, they require resources beyond the reach of an average health department.

For this study, using the simpler SIR model, the team explored outbreaks of measles, norovirus and influenza to show the feasibility of its use and describe a research agenda to further promote interactions between decision makers and the modeling community.

“Unlike standard epidemiological models that are disease and location specific and not transferrable or generalizable, this model is disease and location agnostic and can be used at a much higher level for planning purposes regardless of the specific control measure,” said Alina Deshpande, group leader of the Biosecurity and Public Health group at Los Alamos and principal investigator on the project.

Overall, the team determined, there is a clear need in the field to better understand outbreak parameters, underlying model assumptions, and the ways that these apply to real-world scenarios.

Helping extend Moore’s Law

In the world of semiconductor physics, the goal is to devise more efficient and microscopic ways to control and keep track of 0 and 1, the binary codes that all information storage and logic functions in computers are based on.

A new field of physics seeking such advancements is called valleytronics, which exploits the electron’s “valley degree of freedom” for data storage and logic applications. Simply put, valleys are maxima and minima of electron energies in a crystalline solid. A method to control electrons in different valleys could yield new, super-efficient computer chips.

A University at Buffalo team, led by Hao Zeng, PhD, professor in the Department of Physics, worked with scientists around the world to discover a new way to split the energy levels between the valleys in a two-dimensional semiconductor.

The work is described in a study published online today (May 1, 2017) in the journal Nature Nanotechnology.

The key to Zeng’s discovery is the use of a ferromagnetic compound to pull the valleys apart and keep them at different energy levels. This increases the separation of the valley energies to 10 times what can be obtained by applying an external magnetic field.

“Normally there are two valleys in these atomically thin semiconductors with exactly the same energy. These are called ‘degenerate energy levels’ in quantum mechanics terms. This limits our ability to control individual valleys. An external magnetic field can be used to break this degeneracy. However, the splitting is so small that you would have to go to the National High Magnetic Field Laboratories to measure a sizable energy difference. Our new approach makes the valleys more accessible and easier to control, and this could allow valleys to be useful for future information storage and processing,” Zeng said.
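For scale, the textbook Zeeman-type splitting (a standard expression, not the paper’s analysis) is linear in the applied field, and with the Bohr magneton at roughly 58 µeV per tesla, even multi-tesla laboratory fields separate the valleys by only a few millielectronvolts:

```latex
\[
  \Delta E_{\mathrm{valley}} \;=\; g\,\mu_B\,B
\]
% g: effective valley g-factor, \mu_B \approx 58\ \mu\mathrm{eV/T}, B: field in tesla.
```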

The simplest way to understand how valleys could be used in processing data may be to think of two valleys side by side. When one valley is occupied by electrons, the switch is “on.” When the other valley is occupied, the switch is “off.” Zeng’s work shows that the valleys can be positioned in such a way that a device can be turned “on” and “off” with a tiny amount of electricity.

Microscopic ingredients

Zeng and his colleagues created a two-layered heterostructure, with a 10-nanometer-thick film of magnetic EuS (europium sulfide) on the bottom and a single layer (less than 1 nanometer) of the transition metal dichalcogenide WSe2 (tungsten diselenide) on top. The magnetic field of the bottom layer forced the energy separation of the valleys in the WSe2.

Previous attempts to separate the valleys involved the application of very large magnetic fields from outside. Zeng’s experiment is believed to be the first time a ferromagnetic material has been used in conjunction with an atomically thin semiconductor material to split its valley energy levels.

Memristor chips that see patterns over pixels

Faster image processing could have big implications for autonomous systems such as self-driving cars, says Wei Lu, U-M professor of electrical engineering and computer science. Lu is lead author of a paper on the work published in the current issue of Nature Nanotechnology.

Lu’s next-generation computer components use pattern recognition to shortcut the energy-intensive process conventional systems use to dissect images. In this new work, he and his colleagues demonstrate an algorithm that relies on a technique called “sparse coding” to coax their 32-by-32 array of memristors to efficiently analyze and recreate several photos.

Memristors are electrical resistors with memory — advanced electronic devices that regulate current based on the history of the voltages applied to them. They can store and process data simultaneously, which makes them a lot more efficient than traditional systems. In a conventional computer, logic and memory functions are located at different parts of the circuit.

“The tasks we ask of today’s computers have grown in complexity,” Lu said. “In this ‘big data’ era, computers require costly, constant and slow communications between their processor and memory to retrieve large amounts of data. This makes them large, expensive and power-hungry.”

But like neural networks in a biological brain, networks of memristors can perform many operations at the same time, without having to move data around. As a result, they could enable new platforms that process a vast number of signals in parallel and are capable of advanced machine learning. Memristors are good candidates for deep neural networks, a branch of machine learning, which trains computers to execute processes without being explicitly programmed to do so.

“We need our next-generation electronics to be able to quickly process complex data in a dynamic environment. You can’t just write a program to do that. Sometimes you don’t even have a pre-defined task,” Lu said. “To make our systems smarter, we need to find ways for them to process a lot of data more efficiently. Our approach to accomplish that is inspired by neuroscience.”

A mammal’s brain is able to generate sweeping, split-second impressions of what the eyes take in. One reason is that it can quickly recognize different arrangements of shapes. Humans do this using only a limited number of neurons that become active, Lu says. Both neuroscientists and computer scientists call the process “sparse coding.”

“When we take a look at a chair we will recognize it because its characteristics correspond to our stored mental picture of a chair,” Lu said. “Although not all chairs are the same and some may differ from a mental prototype that serves as a standard, each chair retains some of the key characteristics necessary for easy recognition. Basically, the object is correctly recognized the moment it is properly classified — when ‘stored’ in the appropriate category in our heads.”

Similarly, Lu’s electronic system is designed to detect the patterns very efficiently — and to use as few features as possible to describe the original input.
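In its standard textbook form (not quoted from the paper), sparse coding reconstructs an input from a learned dictionary while penalizing the number of active coefficients, which is exactly the “few features” trade-off described above:

```latex
\[
  \min_{z}\; \tfrac{1}{2}\,\lVert x - D z \rVert_2^2 \;+\; \lambda\,\lVert z \rVert_1
\]
% x: input signal, D: dictionary of learned features (atoms),
% z: sparse coefficient vector, \lambda: sparsity penalty.
```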

In our brains, different neurons recognize different patterns, Lu says.

“When we see an image, the neurons that recognize it will become more active,” he said. “The neurons will also compete with each other to naturally create an efficient representation. We’re implementing this approach in our electronic system.”

The researchers trained their system to learn a “dictionary” of images. Trained on a set of grayscale image patterns, their memristor network was able to reconstruct images of famous paintings and photos and other test patterns.
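In software terms, that training loop looks roughly like the following Python sketch using scikit-learn; the memristor crossbar performs the equivalent matrix operations in hardware, and the image data and settings here are stand-ins.

```python
# Learn a patch dictionary and reconstruct an image from sparse codes.
# A software analogue of the demo; data and parameters are placeholders.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
image = rng.random((64, 64))               # stand-in for a grayscale photo
patches = extract_patches_2d(image, (8, 8))
X = patches.reshape(len(patches), -1)

# Learn an overcomplete dictionary; encoding keeps each patch's code sparse.
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0, random_state=0)
codes = dico.fit_transform(X)              # sparse code per patch
reconstruction = codes @ dico.components_  # each patch rebuilt from few atoms
print("mean active atoms per patch:", (codes != 0).sum(axis=1).mean())
```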

Machine learning creates hyper-predictive computer models

Drug development is a costly and time-consuming process. To narrow down the number of chemical compounds that could be potential drug candidates, scientists utilize computer models that can predict how a particular chemical compound might interact with a biological target of interest — for example, a key protein that might be involved with a disease process. Traditionally, this is done via quantitative structure-activity relationship (QSAR) modeling and molecular docking, which rely on 2- and 3-D information about those chemicals.

Denis Fourches, assistant professor of computational chemistry, wanted to improve upon the accuracy of these QSAR models. “When you’re screening a set of 30 million compounds, you don’t necessarily need a very high reliability with your model — you’re just getting a ballpark idea about the top 5 or 10 percent of that virtual library. But if you’re attempting to narrow a field of 200 analogues down to 10, which is more commonly the case in drug development, your modeling technique must be extremely accurate. Current techniques are definitely not reliable enough.”

Fourches and Jeremy Ash, a graduate student in bioinformatics, decided to incorporate the results of molecular dynamics calculations — all-atom simulations of how a particular compound moves in the binding pocket of a protein — into prediction models based on machine learning.

“Most models only use the two-dimensional structures of molecules,” Fourches says. “But in reality, chemicals are complex three-dimensional objects that move, vibrate and have dynamic intermolecular interactions with the protein once docked in its binding site. You cannot see that if you just look at the 2-D or 3-D structure of a given molecule.”

In a proof-of-concept study, Fourches and Ash looked at the ERK2 kinase — an enzyme associated with several types of cancer — and a group of 87 known ERK2 inhibitors, ranging from very active to inactive. They ran independent molecular dynamics (MD) simulations for each of those 87 compounds and computed critical information about the flexibility of each compound once in the ERK2 pocket. Then they analyzed the MD descriptors using cheminformatics techniques and machine learning. The MD descriptors were able to accurately distinguish active ERK2 inhibitors from weakly active and inactive ones, which was not the case when the models used only 2-D and 3-D structural information.
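The final modeling step is conceptually simple: fit a classifier to a compounds-by-descriptors matrix and check it by cross-validation. A hedged sketch in Python with scikit-learn follows, using random placeholder values rather than the actual MD descriptors or activity labels.

```python
# Classify actives vs. inactives from an MD-descriptor matrix.
# The data here are random placeholders, not the 87 ERK2 compounds.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(87, 40))      # 87 compounds x 40 MD-derived descriptors
y = rng.integers(0, 2, size=87)    # 1 = active ERK2 inhibitor, 0 = inactive

model = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="balanced_accuracy")
print(f"5-fold balanced accuracy: {scores.mean():.2f}")
```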

“We already had data about these 87 molecules and their activity at ERK2,” Fourches says. “So we tested to see if our model was able to reliably find the most active compounds. Indeed, it accurately distinguished between strong and weak ERK2 inhibitors, and because MD descriptors encoded the interactions those compounds create in the pocket of ERK2, it also gave us more insight into why the strong inhibitors worked well.

“Before computing advances allowed us to simulate this kind of data, it would have taken us six months to simulate one single molecule in the pocket of ERK2. Thanks to GPU acceleration, now it only takes three hours. That is a game changer. I’m hopeful that incorporating data extracted from molecular dynamics into QSAR models will enable a new generation of hyper-predictive models that will help bring novel, effective drugs onto the market even faster. It’s artificial intelligence working for us to discover the drugs of tomorrow.”