

Artificial Intelligence Accurately Predicts Protein Folding

Posted on by Dr. Francis Collins

Caption: Researchers used artificial intelligence to map hundreds of new protein structures, including this 3D view of human interleukin-12 (blue) bound to its receptor (purple). Credit: Ian Haydon, University of Washington Institute for Protein Design, Seattle

Proteins are the workhorses of the cell. Mapping the precise shapes of the most important of these workhorses helps to unlock their life-supporting functions or, in the case of disease, potential for dysfunction. While the amino acid sequence of a protein provides the basis for its 3D structure, deducing the atom-by-atom map from principles of quantum mechanics has been beyond the ability of computer programs—until now. 

In a recent study in the journal Science, researchers reported they have developed artificial intelligence approaches for predicting the three-dimensional structure of proteins in record time, based solely on their one-dimensional amino acid sequences [1]. This groundbreaking approach will not only aid researchers in the lab, but guide drug developers in coming up with safer and more effective ways to treat and prevent disease.

This new NIH-supported advance is now freely available to scientists around the world. In fact, it has already helped to solve especially challenging protein structures in cases where experimental data were lacking and other modeling methods hadn’t been enough to get a final answer. It also can now provide key structural information about proteins for which more time-consuming and costly imaging data are not yet available.

The new work comes from a group led by David Baker and Minkyung Baek, University of Washington, Seattle, Institute for Protein Design. Over the course of the pandemic, Baker’s team has been working hard to design promising COVID-19 therapeutics. They’ve also been working to design proteins that might offer promising new ways to treat cancer and other conditions. As part of this effort, they’ve developed new computational approaches for determining precisely how a chain of amino acids, which are the building blocks of proteins, will fold up in space to form a finished protein.

But the ability to predict a protein’s precise structure or shape from its sequence alone had proven to be a difficult problem to solve despite decades of effort. In search of a solution, research teams from around the world have come together every two years since 1994 at the Critical Assessment of Structure Prediction (CASP) meetings. At these gatherings, teams compete against each other with the goal of developing computational methods and software capable of predicting any of nature’s 200 million or more protein structures from sequences alone with the greatest accuracy.

Last year, a London-based company called DeepMind shook up the structural biology world with their entry into CASP, called AlphaFold. (AlphaFold was one of Science’s 2020 Breakthroughs of the Year.) They showed that their artificial intelligence approach—which took advantage of the 170,000 proteins with known structures in an iterative process called deep learning—could predict protein structure with amazing accuracy. In fact, it could predict most protein structures almost as accurately as other high-resolution protein mapping techniques, including today’s go-to strategies of X-ray crystallography and cryo-EM.

The DeepMind performance showed what was possible, but because the advances were made by a world-leading deep learning company, the details on how it worked weren’t made publicly available at the time. The findings left Baker, Baek, and others eager to learn more and to see if they could replicate the impressive predictive ability of AlphaFold outside of such a well-resourced company.

In the new work, Baker and Baek’s team has made stunning progress—using only a fraction of the computational processing power and time required by AlphaFold. The new software, called RoseTTAFold, also relies on a deep learning approach. In deep learning, computers look for patterns in large collections of data. As they begin to recognize complex relationships, some connections in the network are strengthened while others are weakened. The finished network is typically composed of multiple information-processing layers, which operate on the data to return a result—in this case, a protein structure.

Given the complexity of the problem, RoseTTAFold relies not on a single neural network but on three. This three-track neural network simultaneously integrates one-dimensional protein sequence information, two-dimensional information about the distances between amino acids, and three-dimensional atomic structure. Information from these separate tracks flows back and forth to generate accurate models of proteins rapidly from sequence information alone, including structures in complex with other proteins.
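To make the three-track idea a little more concrete, here is a minimal sketch in PyTorch of one block that keeps separate 1D, 2D, and 3D feature tracks and lets information flow between them. The dimensions, layer choices, and the simple broadcasting and pooling used to exchange information here are illustrative assumptions only; they are not the actual RoseTTAFold architecture.

```python
# Illustrative three-track block: the tensor shapes and mixing rules are
# assumptions for the sake of a runnable example, not RoseTTAFold itself.
import torch
import torch.nn as nn

class ThreeTrackBlock(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.seq_update = nn.Linear(dim, dim)    # 1D track: per-residue sequence features
        self.pair_update = nn.Linear(dim, dim)   # 2D track: residue-pair (distance) features
        self.coord_update = nn.Linear(dim, dim)  # 3D track: per-residue structural features
        self.seq_to_pair = nn.Linear(dim, dim)
        self.pair_to_coord = nn.Linear(dim, dim)
        self.coord_to_seq = nn.Linear(dim, dim)

    def forward(self, seq, pair, coord):
        # seq: (batch, L, dim); pair: (batch, L, L, dim); coord: (batch, L, dim)
        seq = torch.relu(self.seq_update(seq))
        # 1D -> 2D: broadcast per-residue features across the pair map
        pair = torch.relu(self.pair_update(pair) + self.seq_to_pair(seq).unsqueeze(2))
        # 2D -> 3D: pool pair features over partners for each residue
        coord = torch.relu(self.coord_update(coord) + self.pair_to_coord(pair).mean(dim=2))
        # 3D -> 1D: feed structural features back into the sequence track
        seq = seq + self.coord_to_seq(coord)
        return seq, pair, coord

block = ThreeTrackBlock()
L = 10  # toy protein length
seq, pair, coord = block(torch.randn(1, L, 64), torch.randn(1, L, L, 64), torch.randn(1, L, 64))
```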

As soon as the researchers had what they thought was a reasonable working approach to solve protein structures, they began sharing it with their structural biologist colleagues. In many cases, it became immediately clear that RoseTTAFold worked remarkably well. What’s more, it has been put to work to solve challenging structural biology problems that had vexed scientists for many years with earlier methods.

RoseTTAFold already has solved hundreds of new protein structures, many of which represent poorly understood human proteins. The 3D rendering of a complex showing a human protein called interleukin-12 in complex with its receptor (above image) is just one example. The researchers have generated other structures directly relevant to human health, including some that are related to lipid metabolism, inflammatory conditions, and cancer. The program is now available on the web and has been downloaded by dozens of research teams around the world.

Cryo-EM and other experimental mapping methods will remain essential to solve protein structures in the lab. But with the artificial intelligence advances demonstrated by RoseTTAFold and AlphaFold, which has now also been released in an open-source version and reported in the journal Nature [2], researchers now can make the critical protein structure predictions at their desktops. This newfound ability will be a boon to basic science studies and has great potential to speed life-saving therapeutic advances.

References:

[1] Accurate prediction of protein structures and interactions using a three-track neural network. Baek M, DiMaio F, Anishchenko I, Dauparas J, Grishin NV, Adams PD, Read RJ, Baker D, et al. Science. 2021 Jul 15:eabj8754.

[2] Highly accurate protein structure prediction with AlphaFold. Jumper J, Evans R, Pritzel A, Green T, Senior AW, Kavukcuoglu K, Kohli P, Hassabis D, et al. Nature. 2021 Jul 15.

Links:

Structural Biology (National Institute of General Medical Sciences/NIH)

The Structures of Life (NIGMS)

Baker Lab (University of Washington, Seattle)

CASP 14 (University of California, Davis)

NIH Support: National Institute of Allergy and Infectious Diseases; National Institute of General Medical Sciences


Artificial Intelligence Speeds Brain Tumor Diagnosis

Posted on by Dr. Francis Collins

Caption: Artificial intelligence speeds diagnosis of brain tumors. Top, doctor reviews digitized tumor specimen in operating room; left, the AI program predicts diagnosis; right, surgeons review results in near real-time.
Credit: Joe Hallisy, Michigan Medicine, Ann Arbor

Computers are now being trained to “see” the patterns of disease often hidden in our cells and tissues. Now comes word of yet another remarkable use of computer-generated artificial intelligence (AI): swiftly providing neurosurgeons with valuable, real-time information about what type of brain tumor is present, while the patient is still on the operating table.

This latest advance comes from an NIH-funded clinical trial of 278 patients undergoing brain surgery. The researchers found they could take a small tumor biopsy during surgery, feed it into a trained computer in the operating room, and receive a diagnosis that rivals the accuracy of an expert pathologist.

Traditionally, sending out a biopsy to an expert pathologist and getting back a diagnosis optimally takes about 40 minutes. But the computer can do it in the operating room on average in under 3 minutes. The time saved helps to inform surgeons how to proceed with their delicate surgery and make immediate and potentially life-saving treatment decisions to assist their patients.

As reported in Nature Medicine, researchers led by Daniel Orringer, NYU Langone Health, New York, and Todd Hollon, University of Michigan, Ann Arbor, took advantage of AI and another technological advance called stimulated Raman histology (SRH). The latter is an emerging clinical imaging technique that makes it possible to generate detailed images of a tissue sample without the usual processing steps.

The SRH technique starts off by bouncing laser light rapidly through a tissue sample. This light enables a nearby fiberoptic microscope to capture the cellular and structural details within the sample. Remarkably, it does so by picking up on subtle differences in the way lipids, proteins, and nucleic acids vibrate when exposed to the light.

Then, using a virtual coloring program, the microscope quickly pieces together and colors in the fine structural details, pixel by pixel. The result: a high-resolution, detailed image that you might expect from a pathology lab, minus the staining of cells, mounting of slides, and the other time-consuming processing procedures.

To interpret the SRH images, the researchers turned to computers and machine learning. To teach a computer a given task, researchers must feed it large datasets of examples from which it learns. In this case, they used a special class of machine learning called deep neural networks, or deep learning, which is inspired by the way neural networks in the human brain process information.

In deep learning, computers look for patterns in large collections of data. As they begin to recognize complex relationships, some connections in the network are strengthened while others are weakened. The finished network is typically composed of multiple information-processing layers, which operate on the data to return a result, in this case a brain tumor diagnosis.

The team trained the computer to classify tissue samples into one of 13 categories commonly found in a brain tumor sample. Those categories included the most common brain tumors: malignant glioma, lymphoma, metastatic tumors, and meningioma. The training was based on more than 2.5 million labeled images representing samples from 415 patients.
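For readers curious what such a classifier looks like in code, here is a minimal PyTorch sketch of a convolutional network that assigns an image patch to one of 13 diagnostic categories. The class count follows the text, but the layer sizes, input dimensions, and names are illustrative assumptions rather than the published model.

```python
# Illustrative convolutional classifier for image patches; architecture and
# input size are assumptions, only the 13-category output follows the post.
import torch
import torch.nn as nn

class SRHClassifier(nn.Module):
    def __init__(self, num_classes=13):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),          # collapse each image to one feature vector
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):                     # x: (batch, 3, H, W) image patches
        h = self.features(x).flatten(1)
        return self.classifier(h)             # raw scores for the 13 categories

model = SRHClassifier()
scores = model(torch.randn(4, 3, 64, 64))     # four dummy patches
predicted = scores.argmax(dim=1)              # most likely category for each patch
```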

Next, they put the machine to the test. The researchers split each of 278 brain tissue samples into two specimens. One was sent to a conventional pathology lab for prepping and diagnosis. The other was imaged with SRH, and then the trained machine made a diagnosis.

Overall, the machine’s performance was quite impressive, returning the right answer about 95 percent of the time. That’s compared to an accuracy of 94 percent for conventional pathology.

Interestingly, the machine made a correct diagnosis in all 17 cases that a pathologist got wrong. Likewise, the pathologist got the right answer in all 14 cases in which the machine slipped up.

The findings show that the combination of SRH and AI can be used to make real-time predictions of a patient’s brain tumor diagnosis to inform surgical decision-making. That may be especially important in places where expert neuropathologists are hard to find.

Ultimately, the researchers suggest that AI may yield even more useful information about a tumor’s underlying molecular alterations, adding ever greater precision to the diagnosis. Similar approaches are also likely to work in supplying timely information to surgeons operating on patients with other cancers too, including cancers of the skin and breast. The research team has made a brief video to give you a more detailed look at the new automated tissue-to-diagnosis pipeline.

Reference:

[1] Near real-time intraoperative brain tumor diagnosis using stimulated Raman histology and deep neural networks. Hollon TC, Pandian B, Adapa AR, Urias E, Save AV, Khalsa SSS, Eichberg DG, D’Amico RS, Farooq ZU, Lewis S, Petridis PD, Marie T, Shah AH, Garton HJL, Maher CO, Heth JA, McKean EL, Sullivan SE, Hervey-Jumper SL, Patil PG, Thompson BG, Sagher O, McKhann GM 2nd, Komotar RJ, Ivan ME, Snuderl M, Otten ML, Johnson TD, Sisti MB, Bruce JN, Muraszko KM, Trautman J, Freudiger CW, Canoll P, Lee H, Camelo-Piragua S, Orringer DA. Nat Med. 2020 Jan 6.

Links:

Video: Artificial Intelligence: Collecting Data to Maximize Potential (NIH)

New Imaging Technique Allows Quick, Automated Analysis of Brain Tumor Tissue During Surgery (National Institute of Biomedical Imaging and Bioengineering/NIH)

Daniel Orringer (NYU Langone, Perlmutter Cancer Center, New York City)

Todd Hollon (University of Michigan, Ann Arbor)

NIH Support: National Cancer Institute; National Institute of Biomedical Imaging and Bioengineering


Can a Mind-Reading Computer Speak for Those Who Cannot?

Posted on by Dr. Francis Collins

Credit: Adapted from Nima Mesgarani, Columbia University’s Zuckerman Institute, New York

Computers have learned to do some amazing things, from beating the world’s top-ranked chess masters to providing the equivalent of feeling in prosthetic limbs. Now, as heard in this brief audio clip counting from zero to nine, an NIH-supported team has combined innovative speech synthesis technology and artificial intelligence to teach a computer to read a person’s thoughts and translate them into intelligible speech.

Turning brain waves into speech isn’t just fascinating science. It might also prove life changing for people who have lost the ability to speak from conditions such as amyotrophic lateral sclerosis (ALS) or a debilitating stroke.

When people speak or even think about talking, their brains fire off distinctive, but previously poorly decoded, patterns of neural activity. Nima Mesgarani and his team at Columbia University’s Zuckerman Institute, New York, wanted to learn how to decode this neural activity.

Mesgarani and his team started out with a vocoder, a voice synthesizer that produces sounds based on an analysis of speech. It’s the very same technology used by Amazon’s Alexa, Apple’s Siri, or other similar devices to listen and respond appropriately to everyday commands.

As reported in Scientific Reports, the first task was to train a vocoder to produce synthesized sounds in response to brain waves instead of speech [1]. To do it, Mesgarani teamed up with neurosurgeon Ashesh Mehta, Hofstra Northwell School of Medicine, Manhasset, NY, who frequently performs brain mapping in people with epilepsy to pinpoint the sources of seizures before performing surgery to remove them.

In five patients already undergoing brain mapping, the researchers monitored activity in the auditory cortex, where the brain processes sound. The patients listened to recordings of short stories read by four speakers. In the first test, eight different sentences were repeated multiple times. In the next test, participants heard four new speakers repeat numbers from zero to nine.

From these exercises, the researchers reconstructed the words that people heard from their brain activity alone. Then the researchers tried various methods to reproduce intelligible speech from the recorded brain activity. They found it worked best to combine the vocoder technology with a form of computer artificial intelligence known as deep learning.

Deep learning is inspired by how our own brain’s neural networks process information, learning to focus on some details but not others. In deep learning, computers look for patterns in data. As they begin to “see” complex relationships, some connections in the network are strengthened while others are weakened.

In this case, the researchers used the deep learning networks to interpret the sounds produced by the vocoder in response to the brain activity patterns. When the vocoder-produced sounds were processed and “cleaned up” by those neural networks, it made the reconstructed sounds easier for a listener to understand as recognizable words, though this first attempt still sounds pretty robotic.
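As a rough illustration of the decoding step, the sketch below (in PyTorch) trains a small network to map neural-activity features to acoustic features that a vocoder could render as sound. The feature sizes, layers, and toy training loop are assumptions made for illustration; they are not the study’s actual model.

```python
# Illustrative sketch: learn a mapping from recorded neural-activity features
# to acoustic features for a vocoder. All sizes and data here are dummies.
import torch
import torch.nn as nn

neural_dim, acoustic_dim = 128, 32    # assumed feature sizes per time step

decoder = nn.Sequential(
    nn.Linear(neural_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, acoustic_dim),     # e.g., spectrogram bands handed to a vocoder
)

optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training pairs: brain activity recorded while listening, alongside the
# acoustic features of what was heard (random stand-ins here).
brain = torch.randn(512, neural_dim)
audio = torch.randn(512, acoustic_dim)

for _ in range(5):                    # a few toy training steps
    loss = loss_fn(decoder(brain), audio)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```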

The researchers will continue testing their system with more complicated words and sentences. They also want to run the same tests on brain activity, comparing what happens when a person speaks or just imagines speaking. They ultimately envision an implant, similar to those already worn by some patients with epilepsy, that will translate a person’s thoughts into spoken words. That might open up all sorts of awkward moments if some of those thoughts weren’t intended for transmission!

Along with recently highlighted new ways to catch irregular heartbeats and cervical cancers, it’s yet another remarkable example of the many ways in which computers and artificial intelligence promise to transform the future of medicine.

Reference:

[1] Towards reconstructing intelligible speech from the human auditory cortex. Akbari H, Khalighinejad B, Herrero JL, Mehta AD, Mesgarani N. Sci Rep. 2019 Jan 29;9(1):874.

Links:

Advances in Neuroprosthetic Learning and Control. Carmena JM. PLoS Biol. 2013;11(5):e1001561.

Nima Mesgarani (Columbia University, New York)

NIH Support: National Institute on Deafness and Other Communication Disorders; National Institute of Mental Health


Using Artificial Intelligence to Detect Cervical Cancer

Posted on by Dr. Francis Collins

Doctor reviewing cell phone
Credit: gettyimages/Dean Mitchell

My last post highlighted the use of artificial intelligence (AI) to create an algorithm capable of detecting 10 different kinds of irregular heart rhythms. But that’s just one of the many potential medical uses of AI. In this post, I’ll tell you how NIH researchers are pairing AI analysis with smartphone cameras to help more women avoid cervical cancer.

In work described in the Journal of the National Cancer Institute [1], researchers used a high-performance computer to analyze thousands of cervical photographs, obtained more than 20 years ago from volunteers in a cancer screening study. The computer learned to recognize specific patterns associated with pre-cancerous and cancerous changes of the cervix, and that information was used to develop an algorithm for reliably detecting such changes in the collection of images. In fact, the AI-generated algorithm outperformed human expert reviewers and all standard screening tests in detecting pre-cancerous changes.

Nearly all cervical cancers are caused by the human papillomavirus (HPV). Cervical cancer screening—first with Pap smears and now also with HPV testing—has greatly reduced deaths from cervical cancer. But this cancer still claims the lives of more than 4,000 U.S. women each year, with higher frequency among women who are black or older [2]. Around the world, more than a quarter-million women die of this preventable disease, mostly in poor and remote areas [3].

These troubling numbers have kept researchers on the lookout for low-cost, easy-to-use tools that could be highly effective at detecting the HPV infections most likely to advance to cervical cancer. Such tools would also need to work well in areas with limited resources for sample preparation and lab analysis. That’s what led to this collaboration involving researchers from NIH’s National Cancer Institute (NCI) and Global Good, Bellevue, WA, which is an Intellectual Ventures collaboration with Bill Gates to invent life-changing technologies for the developing world.

Global Good researchers contacted NCI experts hoping to apply AI to a large dataset of cervical images. The NCI experts suggested an 18-year cervical cancer screening study in Costa Rica. The NCI-supported project, completed in the 1990s, generated nearly 60,000 cervical images, later digitized by NIH’s National Library of Medicine and stored away safely.

The researchers agreed that all these images, obtained in a highly standardized way, would serve as perfect training material for a computer to develop a detection algorithm for cervical cancer. This type of AI, called machine learning, involves feeding tens of thousands of images into a computer equipped with one or more high-powered graphics processing units (GPUs), similar to something you’d find in an Xbox or PlayStation. The GPUs allow the computer to crunch large sets of visual data in the images and devise a set of rules, or algorithms, that allow it to learn to “see” physical features.

Here’s how they did it. First, the researchers got the computer to create a convolutional neural network. That’s a fancy way of saying that they trained it to read images, filter out the millions of non-essential bytes, and retain the few hundred bytes in the photo that make it uniquely identifiable. They fed 1.28 million color images covering hundreds of common objects into the computer to create layers of processing ability that, like the human visual system, can distinguish objects and their qualities.

Once the convolutional neural network was formed, the researchers took the next big step: training the system to see the physical properties of a healthy cervix, a cervix with worrisome cellular changes, or a cervix with pre-cancer. That’s where the thousands of cervical images from the Costa Rican screening trial literally entered the picture.
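The overall recipe—pretrain a network on millions of common-object images, then retrain its final layer on the medical images of interest—is what machine-learning researchers call transfer learning. Here is a hedged sketch using PyTorch and torchvision (a recent version is assumed); the three example class labels mirror the text, and everything else, including the choice of ResNet-18 as the backbone, is an illustrative assumption rather than the study’s actual pipeline.

```python
# Illustrative transfer-learning sketch: ImageNet-pretrained backbone, new
# task-specific output layer. Class names, sizes, and model choice are
# assumptions; torchvision >= 0.13 assumed (weights download on first use).
import torch
import torch.nn as nn
from torchvision import models

classes = ["healthy", "worrisome_changes", "precancer"]   # illustrative labels

net = models.resnet18(weights="IMAGENET1K_V1")   # pretrained on ~1.28M object images
for p in net.parameters():
    p.requires_grad = False                      # keep the general visual layers fixed
net.fc = nn.Linear(net.fc.in_features, len(classes))   # new cervical-screening head

optimizer = torch.optim.Adam(net.fc.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One toy fine-tuning step on dummy images standing in for cervical photos.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, len(classes), (8,))
loss = loss_fn(net(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```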

When all these layers of processing ability were formed, the researchers had created the “automated visual evaluation” algorithm. It went on to identify with remarkable accuracy the images associated with the Costa Rican study’s 241 known precancers and 38 known cancers. The algorithm’s few minor hiccups came mainly from suboptimal images with faded colors or slightly blurred focus.

These minor glitches have the researchers now working hard to optimize the process, including determining how health workers can capture good quality photos of the cervix with a smartphone during a routine pelvic exam and how to outfit smartphones with the necessary software to analyze cervical photos quickly in real-world settings. The goal is to enable health workers to use a smartphone or similar device to provide women with cervical screening and treatment during a single visit.

In fact, the researchers are already field testing their AI-inspired approach on smartphones in the United States and abroad. If all goes well, this low-cost, mobile approach could provide a valuable new tool to help reduce the burden of cervical cancer among underserved populations.

The day that cervical cancer no longer steals the lives of hundreds of thousands of women a year worldwide will be a joyful moment for cancer researchers, as well as a major victory for women’s health.

References:

[1] An observational study of Deep Learning and automated evaluation of cervical images for cancer screening. Hu L, Bell D, Antani S, Xue Z, Yu K, Horning MP, Gachuhi N, Wilson B, Jaiswal MS, Befano B, Long LR, Herrero R, Einstein MH, Burk RD, Demarco M, Gage JC, Rodriguez AC, Wentzensen N, Schiffman M. J Natl Cancer Inst. 2019 Jan 10. [Epub ahead of print]

[2] “Study: Death Rate from Cervical Cancer Higher Than Thought,” American Cancer Society, Jan. 25, 2017.

[3] “World Cancer Day,” World Health Organization, Feb. 2, 2017.

Links:

Cervical Cancer (National Cancer Institute/NIH)

Global Good (Intellectual Ventures, Bellevue, WA)

NIH Support: National Cancer Institute; National Library of Medicine


Using Artificial Intelligence to Catch Irregular Heartbeats

Posted on by Dr. Francis Collins

ECG Readout
Credit: gettyimages/enot-poloskun

Thanks to advances in wearable health technologies, it’s now possible for people to monitor their heart rhythms at home for days, weeks, or even months via wireless electrocardiogram (EKG) patches. In fact, my Apple Watch makes it possible to record a real-time EKG whenever I want. (I’m glad to say I am in normal sinus rhythm.)

For true medical benefit, however, the challenge lies in analyzing the vast amounts of data—often hundreds of hours’ worth per person—to distinguish reliably between harmless rhythm irregularities and potentially life-threatening problems. Now, NIH-funded researchers have found that artificial intelligence (AI) can help.

A powerful computer “studied” more than 90,000 EKG recordings, from which it “learned” to recognize patterns, form rules, and apply them accurately to future EKG readings. The computer became so “smart” that it could classify 10 different types of irregular heart rhythms, including atrial fibrillation (AFib). In fact, after just seven months of training, the computer-devised algorithm was as good—and in some cases even better than—cardiology experts at making the correct diagnostic call.

EKG tests measure electrical impulses in the heart, which signal the heart muscle to contract and pump blood to the rest of the body. The precise, wave-like features of the electrical impulses allow doctors to determine whether a person’s heart is beating normally.

For example, in people with AFib, the heart’s upper chambers (the atria) contract rapidly and unpredictably, causing the ventricles (the main heart muscle) to contract irregularly rather than in a steady rhythm. This is an important arrhythmia to detect, even if it may only be present occasionally over many days of monitoring. That’s not always easy to do with current methods.

Here’s where the team, led by computer scientists Awni Hannun and Andrew Ng, Stanford University, Palo Alto, CA, saw an AI opportunity. As published in Nature Medicine, the Stanford team started by assembling a large EKG dataset from more than 53,000 people [1]. The data included various forms of arrhythmia and normal heart rhythms from people who had worn the FDA-approved Zio patch for about two weeks.

The Zio patch is a 2-by-5-inch adhesive patch, worn much like a bandage, on the upper left side of the chest. It’s water resistant and can be kept on around the clock while a person sleeps, exercises, or takes a shower. The wireless patch continuously monitors heart rhythms, storing EKG data for later analysis.

The Stanford researchers looked to machine learning to process all the EKG data. In machine learning, computers rely on large datasets of examples in order to learn how to perform a given task. The accuracy improves as the machine “sees” more data.

But the team’s real interest was in utilizing a special class of machine learning called deep neural networks, or deep learning. Deep learning is inspired by how our own brain’s neural networks process information, learning to focus on some details but not others.

In deep learning, computers look for patterns in data. As they begin to “see” complex relationships, some connections in the network are strengthened while others are weakened. The network is typically composed of multiple information-processing layers, which operate on the data and compute increasingly complex and abstract representations.

Those data reach the final output layer, which acts as a classifier, assigning each bit of data to a particular category or, in the case of the EKG readings, a diagnosis. In this way, computers can learn to analyze and sort highly complex data using both more obvious and hidden features.
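Put together, the pipeline described above looks roughly like the PyTorch sketch below: a 1D convolutional network whose final linear layer is the classifier that assigns each EKG window to a rhythm category. The 12 outputs (10 arrhythmias, normal rhythm, and noise) follow the post; the window length, sampling rate, and layers are illustrative assumptions, not the published model.

```python
# Illustrative 1D convolutional classifier for EKG windows; only the 12-way
# output follows the text, everything else is an assumption for illustration.
import torch
import torch.nn as nn

class EKGNet(nn.Module):
    def __init__(self, num_classes=12):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, num_classes)   # final output (classifier) layer

    def forward(self, x):               # x: (batch, 1, samples), e.g. one 30-second window
        return self.classifier(self.conv(x).flatten(1))

model = EKGNet()
window = torch.randn(2, 1, 6000)        # two dummy 30 s windows at an assumed 200 Hz
diagnosis = model(window).argmax(dim=1) # predicted rhythm category for each window
```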

Ultimately, the computer in the new study could differentiate between EKG readings representing 10 different arrhythmias as well as a normal heart rhythm. It could also tell the difference between irregular heart rhythms and background “noise” caused by interference of one kind or another, such as a jostled or disconnected Zio patch.

For validation, the computer attempted to assign a diagnosis to the EKG readings of 328 additional patients. Independently, several expert cardiologists also read those EKGs and reached a consensus diagnosis for each patient. In almost all cases, the computer’s diagnosis agreed with the consensus of the cardiologists. The computer also made its calls much faster.

Next, the researchers compared the computer’s diagnoses to those of six individual cardiologists who weren’t part of the original consensus committee. And, the results show that the computer actually outperformed these experienced cardiologists!

The findings suggest that artificial intelligence can be used to improve the accuracy and efficiency of EKG readings. In fact, Hannun reports that iRhythm Technologies, maker of the Zio patch, has already incorporated the algorithm into the interpretation now being used to analyze data from real patients.

As impressive as this is, we are surely just at the beginning of AI applications to health and health care. In recognition of the opportunities ahead, NIH has recently launched a working group on AI to explore ways to make the best use of existing data, and harness the potential of artificial intelligence and machine learning to advance biomedical research and the practice of medicine.

Meanwhile, more and more impressive NIH-supported research featuring AI is being published. In my next blog, I’ll highlight a recent paper that uses AI to make a real difference for cervical cancer, particularly in low resource settings.

Reference:

[1] Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Hannun AY, Rajpurkar P, Haghpanahi M, Tison GH, Bourn C, Turakhia MP, Ng AY. Nat Med. 2019 Jan;25(1):65-69.

Links:

Arrhythmia (National Heart, Lung, and Blood Institute/NIH)

Video: Artificial Intelligence: Collecting Data to Maximize Potential (NIH)

Andrew Ng (Palo Alto, CA)

NIH Support: National Heart, Lung, and Blood Institute


Teaching Computers to “See” the Invisible in Living Cells

Posted on by Dr. Francis Collins

Caption: While analyzing brain cells, a computer program “thinks” about which cellular structure to identify.
Credit: Steven Finkbeiner, University of California, San Francisco and the Gladstone Institutes

For centuries, scientists have trained themselves to look through microscopes and carefully study cells’ structural and molecular features. But those long hours bent over a microscope poring over microscopic images could be less necessary in the years ahead. The job of analyzing cellular features could one day belong to specially trained computers.

In a new study published in the journal Cell, researchers trained computers by feeding them paired sets of fluorescently labeled and unlabeled images of brain tissue millions of times in a row [1]. This allowed the computers to discern patterns in the images, form rules, and apply them to viewing future images. Using this so-called deep learning approach, the researchers demonstrated that the computers not only learned to recognize individual cells, they also developed an almost superhuman ability to identify the cell type and whether a cell was alive or dead. Even more remarkable, the trained computers made all those calls without any need for harsh chemical labels, including fluorescent dyes or stains, which researchers normally require to study cells. In other words, the computers learned to “see” the invisible!
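As a rough picture of how such paired-image training works, the sketch below (in PyTorch) fits a small image-to-image network to predict a fluorescence-like label channel from an unlabeled, transmitted-light image. The architecture and data sizes are toy assumptions; the published models are far larger and more sophisticated.

```python
# Illustrative paired-image training: predict a fluorescent-label channel from
# an unlabeled image. All sizes and data here are dummy stand-ins.
import torch
import torch.nn as nn

predictor = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),        # predicted fluorescence-like channel
)

optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy stand-ins for a paired dataset: unlabeled images and the matching
# fluorescently labeled images of the same field of view.
unlabeled = torch.randn(8, 1, 64, 64)
fluorescent = torch.randn(8, 1, 64, 64)

for _ in range(3):                          # a few toy training steps
    loss = loss_fn(predictor(unlabeled), fluorescent)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```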


Autism Spectrum Disorder: Progress Toward Earlier Diagnosis

Posted on by Dr. Francis Collins

Sleeping baby

Credit: Stockbyte

Research shows that the roots of autism spectrum disorder (ASD) generally start early—most likely in the womb. That’s one more reason, on top of a large number of epidemiological studies, why current claims about the role of vaccines in causing autism can’t be right. But how early is ASD detectable? It’s a critical question, since early intervention has been shown to help limit the effects of autism. The problem is there’s currently no reliable way to detect ASD until around 18–24 months, when the social deficits and repetitive behaviors associated with the condition begin to appear.

Several months ago, an NIH-funded team offered promising evidence that it may be possible to detect ASD in high-risk 1-year-olds by shifting attention from how kids act to how their brains have grown [1]. Now, new evidence from that same team suggests that neurological signs of ASD might be detectable even earlier.