
Cool Videos

Experiencing the Neural Symphony Underlying Memory through a Blend of Science and Art

Posted on by John Ngai, PhD, NIH BRAIN Initiative

Ever wonder how you’re able to remember life events that happened days, months, or even years ago? You have your hippocampus to thank. This essential area in the brain relies on intense and highly synchronized patterns of activity that aren’t found anywhere else in the brain. They’re called “sharp-wave ripples.”

These dynamic ripples have been likened to the brain version of an instant replay, appearing most commonly during rest after a notable experience. And, now, the top video winner in this year’s Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative’s annual Show Us Your BRAINs! Photo and Video Contest allows you to witness the “chatter” that those ripples set off in other neurons. The details of this chatter determine just how durable a particular memory is in ways neuroscientists are still working hard to understand.

Neuroscientist Saman Abbaspoor in the lab of Kari Hoffman at Vanderbilt University, Nashville, in collaboration with Tyler Sloan from the Montreal-based Quorumetrix Studio, sets the stage in the winning video by showing an electrode or probe implanted in the brain that can reach the hippocampus. This device allows the Hoffman team to wirelessly record neural activity in different layers of the hippocampus as the animal either rests or moves freely about.

In the scenes that follow, neurons (blue, cyan, and yellow) flash on and off. The colors highlight the fact that this brain area and the neurons within it aren’t all the same. Various types of neurons are found in the brain area’s different layers, some of which spark the activity you see, while others dampen it.

Hoffman explains that the specific shapes of the individual cells pictured are realistic but also symbolic. The team didn’t trace the individual branches of neurons in the brain in these studies; instead, they drew on previous anatomical studies, overlaying those intricate forms with flashing bursts of activity that come straight from their recorded data.

Sloan then added yet another layer of artistry to the experience with what he refers to as sonification, or the use of music to convey information about the dynamic and coordinated bursts of activity in those cells. At five seconds in, you hear the subtle flutter of a sharp-wave ripple. With each burst of active neural chatter that follows, you hear the dramatic plink of piano keys.
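
For readers curious how a sonification like this can be put together, here is a minimal sketch in Python (assuming NumPy is available) that renders a list of neural events as short piano-like tones in a WAV file. It is not Sloan’s actual pipeline; the event times, layer labels, and pitch mapping are hypothetical stand-ins for illustration only.

```python
# Minimal sonification sketch (illustrative only): render a list of neural
# events as short piano-like "plinks" in a WAV file. The event times, layer
# labels, and pitch mapping below are hypothetical stand-ins, not the data
# or pipeline used in the winning video.
import wave
import numpy as np

SAMPLE_RATE = 44100  # audio samples per second

def plink(freq_hz, duration_s=0.25):
    """A short sine tone with an exponential decay, vaguely piano-like."""
    t = np.linspace(0.0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    return np.sin(2 * np.pi * freq_hz * t) * np.exp(-6.0 * t)

# Hypothetical events: (time in seconds, layer of the neuron that fired)
events = [(0.5, "deep"), (0.8, "superficial"), (1.1, "deep"), (1.6, "superficial")]
pitch_for_layer = {"deep": 262.0, "superficial": 523.0}  # C4 and C5, arbitrary choice

# Mix every event into one 3-second track at its time stamp.
track = np.zeros(int(SAMPLE_RATE * 3.0))
for t_event, layer in events:
    tone = plink(pitch_for_layer[layer])
    start = int(t_event * SAMPLE_RATE)
    track[start:start + tone.size] += tone

# Normalize and write a 16-bit mono WAV file.
track = (track / np.max(np.abs(track)) * 32767).astype(np.int16)
with wave.open("ripple_chatter.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(track.tobytes())
```

Mapping a property of each event (here, the firing neuron’s layer) to pitch is one simple way sound can carry information that color alone cannot.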

Together, their winning video creates a unique sensory experience that helps to explain what goes on during memory formation and recall in a way that words alone can’t adequately describe. Through their ongoing studies, Hoffman reports that they’ll continue delving even deeper into understanding these intricate dynamics and their implications for learning and memory. Ultimately, they also want to explore how brain ripples, and the neural chatter they set off, might be enhanced to make memory formation and recall even stronger.

References:

Abbaspoor S, Hoffman KL. State-dependent circuit dynamics of superficial and deep CA1 pyramidal cells in macaques. bioRxiv. 2023. DOI: 10.1101/2023.12.06.570369. Please note that this article is a preprint and has not been peer-reviewed.

NIH Support: The NIH BRAIN Initiative

This article was updated on Dec. 15, 2023 to better reflect the collaboration on the project among Abbaspoor, Hoffman, and Sloan.


3D Animation Captures Viral Infection in Action

Posted on by Lawrence Tabak, D.D.S., Ph.D.

With the summer holiday season now in full swing, the blog will also swing into its annual August series. For most of the month, I will share with you just a small sampling of the colorful videos and snapshots of life captured in a select few of the hundreds of NIH-supported research labs around the country.

To get us started, let’s turn to the study of viruses. Researchers now can generate vast amounts of data relatively quickly on a virus of interest. But data are often displayed as numbers or two-dimensional digital images on a computer screen. For most virologists, it’s extremely helpful to see a virus and its data streaming in three dimensions. To do so, they turn to a technological tool that we all know so well: animation.

This research animation features the chikungunya virus, a sometimes debilitating, mosquito-borne pathogen transmitted mainly in developing countries in Africa, Asia and the Americas. The animation illustrates large amounts of research data to show how the chikungunya virus infects our cells and uses its specialized machinery to release its genetic material into the cell and seed future infections. Let’s take a look. 

In the opening seconds, you see how receptor-binding glycoproteins (light blue), proteins on the viral surface with carbohydrates attached, dock with protein receptors (yellow) on a host cell. At five seconds, the virus is drawn inside the cell. The change in the color of the chikungunya particle shows that it’s now coated in a vesicle, which helps the virus make its way unhindered through the cytoplasm.

At 10 seconds, the virus enters an endosome, one of the ubiquitous bubble-like compartments that transport material from outside the cell into the cytosol, the fluid part of the cytoplasm. Once inside the endosome, the acidic environment causes other glycoproteins (red, blue, yellow) on the viral surface to change shape and become more flexible and dynamic. These glycoproteins serve as machinery that reaches out and grabs onto the surrounding endosome membrane, which ultimately will be fused with the virus’s own membrane.

As more of those fusion glycoproteins grab on, fold back on themselves, and form into hairpin-like shapes, they pull the membranes together. The animation illustrates not only the changes in protein organization, but the resulting effects on the integrity of the membrane structures as this dynamic process proceeds. At 53 seconds, the viral protein shell, or capsid (green), which contains the virus’ genetic instructions, is released back out into the cell where it will ultimately go on to make more virus.

This remarkable animation comes from Margot Riggi and Janet Iwasa, experts in visualizing biology at the University of Utah’s Animation Lab, Salt Lake City. Their data source was researcher Kelly Lee, University of Washington, Seattle, who collaborated closely with Riggi and Iwasa on this project. The final product was considered so outstanding that it took the top prize for short videos in the 2022 BioArt Awards competition, sponsored by the Federation of American Societies for Experimental Biology (FASEB).

The Lee lab uses various research methods to understand the specific shape-shifting changes that chikungunya and other viruses perform as they invade and infect cells. One of the lab’s key visual tools is cryo-electron microscopy (cryo-EM), specifically cryo-electron tomography (cryo-ET). Cryo-ET enables complex 3D structures, including the intermediate states of biological reactions, to be captured and imaged in remarkably fine detail.

In a study in the journal Nature Communications [1] last year, Lee’s team used cryo-ET to reveal how the chikungunya virus invades and delivers its genetic cargo into human cells to initiate a new infection. While Lee’s cryo-ET data revealed stages of the virus entry process and fine structural details of changes to the virus as it enters a cell and starts an infection, it still represented a series of snapshots with missing steps in between. So, Lee’s lab teamed up with The Animation Lab to help beautifully fill in the gaps.

Visualizing chikungunya and similar viruses in action not only makes for informative animations, it helps researchers discover better potential targets to intervene in this process. This basic research continues to make progress, and so do ongoing efforts to develop a chikungunya vaccine [2] and specific treatments that would help give millions of people relief from the aches, pains, and rashes associated with this still-untreatable infection.

References:

[1] Visualization of conformational changes and membrane remodeling leading to genome delivery by viral class-II fusion machinery. Mangala Prasad V, Blijleven JS, Smit JM, Lee KK. Nat Commun. 2022 Aug 15;13(1):4772. doi: 10.1038/s41467-022-32431-9. PMID: 35970990; PMCID: PMC9378758.

[2] Experimental chikungunya vaccine is safe and well-tolerated in early trial, National Institute of Allergy and Infectious Diseases news release, April 27, 2020.

Links:

Chikungunya Virus (Centers for Disease Control and Prevention, Atlanta)

Global Arbovirus Initiative (World Health Organization, Geneva, Switzerland)

The Animation Lab (University of Utah, Salt Lake City)

Video: Janet Iwasa (TED Speaker)

Lee Lab (University of Washington, Seattle)

BioArt Awards (Federation of American Societies for Experimental Biology, Rockville, MD)

NIH Support: National Institute of General Medical Sciences; National Institute of Allergy and Infectious Diseases


The Amazing Brain: Capturing Neurons in Action

Posted on by Lawrence Tabak, D.D.S., Ph.D.

Credit: Andreas Tolias, Baylor College of Medicine, Houston

With today’s powerful imaging tools, neuroscientists can monitor the firing and function of many distinct neurons in our brains, even while we move freely about. They also possess another set of tools to capture remarkable, high-resolution images of the brain’s many thousands of individual neurons, tracing the form of each intricate branch of their tree-like structures.

Most brain imaging approaches don’t capture neural form and function at once. Yet that’s precisely what you’re seeing in this knockout of a movie, another winner in the Show Us Your BRAINs! Photo and Video Contest, supported by NIH’s Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative.

This first-of-its-kind look into the mammalian brain, produced by Andreas Tolias, Baylor College of Medicine, Houston, and colleagues, features about 200 neurons in the visual cortex, which receives and processes visual information. First, you see a colorful, tightly packed network of neurons. Then, those neurons, which were colorized by the researchers in vibrant pinks, reds, blues, and greens, pull apart to reveal their finely detailed patterns and shapes. Throughout the video, you can see neural activity, which appears as flashes of white that resemble lightning bolts.

Making this movie was a multi-step process. First, the Tolias group presented laboratory mice with a series of visual cues, using a functional imaging approach called two-photon calcium imaging to record the electrical activity of individual neurons. While this technique allowed the researchers to pinpoint the precise locations and activity of each individual neuron in the visual cortex, they couldn’t zoom in to see their precise structures.

So, the Baylor team sent the mice to colleagues Nuno da Costa and Clay Reid, Allen Institute for Brain Science, Seattle, who had the needed electron microscopes and technical expertise to zoom in on these structures. Their data allowed collaborator Sebastian Seung’s team, Princeton University, Princeton, NJ, to trace individual neurons in the visual cortex along their circuitous paths. Finally, they used sophisticated machine learning algorithms to carefully align the two imaging datasets and produce this amazing movie.

This research was supported by the Intelligence Advanced Research Projects Activity (IARPA), part of the Office of the Director of National Intelligence. IARPA is one of NIH’s governmental collaborators in the BRAIN Initiative.

Tolias and team already are making use of their imaging data to learn more about the precise ways in which individual neurons and groups of neurons in the mouse visual cortex integrate visual inputs to produce a coherent view of the animals’ surroundings. They’ve also collected an even-larger data set, scaling their approach up to tens of thousands of neurons. Those data are now freely available to other neuroscientists to help advance their work. As researchers make use of these and similar data, this union of neural form and function will surely yield new high-resolution discoveries about the mammalian brain.

Links:

Tolias Lab (Baylor College of Medicine, Houston)

Nuno da Costa (Allen Institute for Brain Science, Seattle)

R. Clay Reid (Allen Institute)

H. Sebastian Seung (Princeton University, Princeton, NJ)

Machine Intelligence from Cortical Networks (MICrONS) Explorer

Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

Show Us Your BRAINs Photo & Video Contest (BRAIN Initiative)

NIH Support: BRAIN Initiative; Common Fund


Groundbreaking Study Maps Key Brain Circuit

Posted on by Dr. Francis Collins

Biologists have long wondered how neurons from different regions of the brain actually interconnect into integrated neural networks, or circuits. A classic example is a complex master circuit projecting across several regions of the vertebrate brain called the basal ganglia. It’s involved in many fundamental brain processes, such as controlling movement, thought, and emotion.

In a paper published recently in the journal Nature, an NIH-supported team working in mice has created a wiring diagram, or connectivity map, of a key component of this master circuit that controls voluntary movement. This groundbreaking map will guide the way for future studies of the basal ganglia’s direct connections with the thalamus, which is a hub for information going to and from the spinal cord, as well as its links to the motor cortex in the front of the brain, which controls voluntary movements.

This 3D animation drawn from the paper’s findings captures the biological beauty of these intricate connections. It starts out zooming around four of the six horizontal layers of the motor cortex. At about 6 seconds in, the video focuses on nerve cell projections from the thalamus (blue) connecting to cortex nerve cells that provide input to the basal ganglia (green). It also shows connections to the cortex nerve cells that input to the thalamus (red).

At about 25 seconds, the video scans back to provide a quick close-up of the cell bodies (green and red bulges). It then zooms out to show the broader distribution of nerve cells within the cortex layers and the branched fringes of corticothalamic nerve cells (red) at the top edge of the cortex.

The video comes from scientific animator Jim Stanis, University of Southern California Mark and Mary Stevens Neuroimaging and Informatics Institute, Los Angeles. He collaborated with Nick Foster, lead author on the Nature paper and a research scientist in the NIH-supported lab of Hong-Wei Dong at the University of California, Los Angeles.

The two worked together to bring to life hundreds of microscopic images of this circuit, known by the unusually long, hyphenated name: the cortico-basal ganglia-thalamic loop. It consists of a series of subcircuits that feed into a larger signaling loop.

The subcircuits in the loop make it possible to connect thinking with movement, helping the brain learn useful sequences of motor activity. The looped subcircuits also allow the brain to perform very complex tasks such as achieving goals (completing a marathon) and adapting to changing circumstances (running uphill or downhill).

Although scientists had long assumed the cortico-basal ganglia-thalamic loop existed and formed a tight, closed loop, they had no real proof. This new research, funded through NIH’s Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative, provides that proof, showing anatomically that the nerve cells physically connect, as highlighted in this video. The research also provides electrical proof through tests showing that stimulating individual segments activates the others.

Detailed maps of neural circuits are in high demand. That’s what makes results like these so exciting to see. Researchers can now better navigate this key circuit not only in mice but other vertebrates, including humans. Indeed, the cortico-basal ganglia-thalamic loop may be involved in a number of neurological and neuropsychiatric conditions, including Huntington’s disease, Parkinson’s disease, schizophrenia, and addiction. In the meantime, Stanis, Foster, and colleagues have left us with a very cool video to watch.

Reference:

[1] The mouse cortico-basal ganglia-thalamic network. Foster NN, Barry J, Korobkova L, Garcia L, Gao L, Becerra M, Sherafat Y, Peng B, Li X, Choi JH, Gou L, Zingg B, Azam S, Lo D, Khanjani N, Zhang B, Stanis J, Bowman I, Cotter K, Cao C, Yamashita S, Tugangui A, Li A, Jiang T, Jia X, Feng Z, Aquino S, Mun HS, Zhu M, Santarelli A, Benavidez NL, Song M, Dan G, Fayzullina M, Ustrell S, Boesen T, Johnson DL, Xu H, Bienkowski MS, Yang XW, Gong H, Levine MS, Wickersham I, Luo Q, Hahn JD, Lim BK, Zhang LI, Cepeda C, Hintiryan H, Dong HW. Nature. 2021;598(7879):188-194.

Links:

Brain Basics: Know Your Brain (National Institute of Neurological Disorders and Stroke/NIH)

Dong Lab (University of California, Los Angeles)

Mark and Mary Stevens Neuroimaging and Informatics Institute (University of Southern California, Los Angeles)

The Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

NIH Support: Eunice Kennedy Shriver National Institute of Child Health and Human Development; National Institute on Deafness and Other Communication Disorders; National Institute of Mental Health


Tapping Into The Brain’s Primary Motor Cortex

Posted on by Dr. Francis Collins

If you’re like me, you might catch yourself during the day in front of a computer screen mindlessly tapping your fingers. (I always check first to be sure my mute button is on!) But all that tapping isn’t as mindless as you might think.

While a research participant performs a simple motor task, tapping her fingers together, this video shows blood flow within the folds of her brain’s primary motor cortex (gray and white), which controls voluntary movement. Areas of high brain activity (yellow and red) emerge first in the omega-shaped “hand-knob” region, the part of the brain controlling hand movement (right of center), and then farther back within the primary somatosensory cortex (which borders the motor cortex toward the back of the head).

About 38 seconds in, the right half of the video screen illustrates that the finger tapping activates both superficial and deep layers of the primary motor cortex. In contrast, the sensation of a hand being brushed (a sensory task) mostly activates superficial layers, where the primary sensory cortex is located. This fits with what we know about the superficial and deep layers of the hand-knob region, since they are responsible for receiving sensory input and generating motor output to control finger movements, respectively [1].

The video showcases a new technology called zoomed 7T perfusion functional MRI (fMRI). It was an entry in the recent Show Us Your BRAINs! Photo and Video Contest, supported by NIH’s Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative.

The technology is under development by an NIH-funded team led by Danny J.J. Wang, University of Southern California Mark and Mary Stevens Neuroimaging and Informatics Institute, Los Angeles. Zoomed 7T perfusion fMRI was developed by Xingfeng Shao and brought to life by the group’s medical animator Jim Stanis.

Measuring brain activity using fMRI to track perfusion is not new. The brain needs a lot of oxygen, carried to it by arteries running throughout the head, to carry out its many complex functions. Given the importance of oxygen to the brain, you can think of perfusion levels, measured by fMRI, as a stand-in measure for neural activity.

There are two things that are new about zoomed 7T perfusion fMRI. For one, it uses the first ultrahigh magnetic field imaging scanner approved by the Food and Drug Administration. The technology also has high sensitivity for detecting blood flow changes in tiny arteries and capillaries throughout the many layers of the cortex [2].

Compared to previous MRI methods with weaker magnets, the new technique can measure blood flow on a fine-grained scale, enabling scientists to remove unwanted signals (“noise”) such as those from surface-level arteries and veins. Getting an accurate read-out of activity from region to region across cortical layers can help scientists understand human brain function in greater detail in health and disease.

Having shown that the technology works as expected during relatively mundane hand movements, Wang and his team are now developing the approach for fine-grained 3D mapping of brain activity throughout the many layers of the brain. This type of analysis, known as mesoscale mapping, is key to understanding dynamic activities of neural circuits that connect brain cells across cortical layers and among brain regions.

Decoding circuits, and ultimately rewiring them, is a major goal of NIH’s BRAIN Initiative. Zoomed 7T perfusion fMRI gives us a window into 4D biology, which is the ability to watch 3D objects over time scales in which life happens, whether it’s playing an elaborate drum roll or just tapping your fingers.

References:

[1] Neuroanatomical localization of the ‘precentral knob’ with computed tomography imaging. Park MC, Goldman MA, Park MJ, Friehs GM. Stereotact Funct Neurosurg. 2007;85(4):158-61.

[2] Laminar perfusion imaging with zoomed arterial spin labeling at 7 Tesla. Shao X, Guo F, Shou Q, Wang K, Jann K, Yan L, Toga AW, Zhang P, Wang DJJ. bioRxiv 2021.04.13.439689.

Links:

Brain Basics: Know Your Brain (National Institute of Neurological Disorders and Stroke)

Laboratory of Functional MRI Technology (University of Southern California Mark and Mary Stevens Neuroimaging and Informatics Institute)

The Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

Show Us Your BRAINs! Photo and Video Contest (BRAIN Initiative)

NIH Support: National Institute of Neurological Disorders and Stroke; National Institute of Biomedical Imaging and Bioengineering; Office of the Director


New Microscope Technique Provides Real-Time 3D Views

Posted on by Dr. Francis Collins

Most of the “cool” videos shared on my blog are borne of countless hours behind a microscope. Researchers must move a biological sample through a microscope’s focus, slowly acquiring hundreds of high-res 2D snapshots, one painstaking snap at a time. Afterwards, sophisticated computer software takes this ordered “stack” of images, calculates how the object would look from different perspectives, and later displays them as 3D views of life that can be streamed as short videos.

But this video is different. It was created by what’s called a multi-angle projection imaging system. This new optical device requires just a few camera snapshots and two mirrors to image a biological sample from multiple angles at once. Because the device eliminates the time-consuming process of acquiring individual image slices, it’s up to 100 times faster than current technologies and doesn’t require computer software to construct the movie. The kicker is that the video can be displayed in real time, which isn’t possible with existing image-stacking methods.

The video here shows two human melanoma cells, rotating several times between overhead and side views. You can see large amounts of the protein PI3K (brighter orange hues indicate higher concentrations), which helps some cancer cells divide and move around. Near the cell’s perimeter are small, dynamic surface protrusions. PI3K in these “blebs” is thought to help tumor cells navigate and survive in foreign tissues as the tumor spreads to other organs, a process known as metastasis.

The new multi-angle projection imaging system optical device was described in a paper published recently in the journal Nature Methods [1]. It was created by Reto Fiolka and Kevin Dean at the University of Texas Southwestern Medical Center, Dallas.

Like most technology, this device is complicated. Rather than the microscope and camera doing all the work, as is customary, two mirrors within the microscope play a starring role. During a camera exposure, these mirrors rotate ever so slightly and warp the acquired image in such a way that successive, unique perspectives of the sample magically come into view. By changing the amount of warp, the sample appears to rotate in real-time. As such, each view shown in the video requires only one camera snapshot, instead of acquiring hundreds of slices in a conventional scheme.

The concept traces to computer science and an algorithm called the shear warp transform method. It’s used to observe 3D objects from different perspectives on a 2D computer monitor. Fiolka, Dean, and team found they could implement a similar algorithm optically for use with a microscope. What’s more, their multi-angle projection imaging system is easy-to-use, inexpensive, and can be converted for use on any camera-based microscope.
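
To make the idea concrete, here is a minimal software sketch of the shear-warp principle, using NumPy and a synthetic volume; it illustrates the concept the mirrors implement optically, not the microscope’s actual code. Each depth slice is shifted sideways in proportion to its depth before the slices are summed, so changing the shear changes the apparent viewing angle of the projection (a full shear-warp renderer also applies a final 2D warp, omitted here).

```python
# Illustrative shear-warp projection of a synthetic 3D volume (not the actual
# microscope software): shear each depth slice laterally in proportion to its
# depth, then composite along depth. Varying the shear varies the apparent
# viewing angle, one "snapshot" per frame.
import numpy as np

def shear_warp_projection(volume, shear_px_per_slice):
    """Project a (z, y, x) volume after shearing its slices along x."""
    nz, ny, nx = volume.shape
    projection = np.zeros((ny, nx))
    for z in range(nz):
        shift = int(round(z * shear_px_per_slice))        # lateral offset grows with depth
        projection += np.roll(volume[z], shift, axis=1)   # shear this slice (wraps at edges)
    return projection

# Synthetic sample: a single bright blob inside a 64x64x64 volume.
zz, yy, xx = np.meshgrid(np.arange(64), np.arange(64), np.arange(64), indexing="ij")
volume = np.exp(-((zz - 40) ** 2 + (yy - 32) ** 2 + (xx - 20) ** 2) / 100.0)

# A short "rotation": one projection per shear value.
frames = [shear_warp_projection(volume, s) for s in np.linspace(-0.5, 0.5, 5)]
```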

The researchers have used the device to view samples spanning a range of sizes: from mitochondria and other tiny organelles inside cells to the beating heart of a young zebrafish. And, as the video shows, it has been applied to study cancer and other human diseases.

In a neat, but also scientifically valuable twist, the new optical method can generate a virtual reality view of a sample. Any microscope user wearing the appropriately colored 3D glasses immediately sees the objects.

While virtual reality viewing of cellular life might sound like a gimmick, Fiolka and Dean believe that it will help researchers use their current microscopes to see any sample in 3D—offering the chance to find rare and potentially important biological events much faster than is possible with even the most advanced microscopes today.

Fiolka, Dean, and team are still just getting started. Because the method analyzes tissue very quickly within a single image frame, they say it will enable scientists to observe the fastest events in biology, such as the movement of calcium throughout a neuron—or even a whole bundle of neurons at once. For neuroscientists trying to understand the brain, that’s a movie they will really want to see.

Reference:

[1] Real-time multi-angle projection imaging of biological dynamics. Chang BJ, Manton JD, Sapoznik E, Pohlkamp T, Terrones TS, Welf ES, Murali VS, Roudot P, Hake K, Whitehead L, York AG, Dean KM, Fiolka R. Nat Methods. 2021 Jul;18(7):829-834.

Links:

Metastatic Cancer: When Cancer Spreads (National Cancer Institute)

Fiolka Lab (University of Texas Southwestern Medical Center, Dallas)

Dean Lab (University of Texas Southwestern)

Microscopy Innovation Lab (University of Texas Southwestern)

NIH Support: National Cancer Institute; National Institute of General Medical Sciences


Immune Macrophages Use Their Own ‘Morse Code’

Posted on by Dr. Francis Collins

Credit: Hoffmann Lab, UCLA

In the language of Morse code, the letter “S” is three short sounds and the letter “O” is three longer sounds. Put them together in the right order and you have a cry for help: S.O.S. Now an NIH-funded team of researchers has cracked a comparable code that specialized immune cells called macrophages use to signal and respond to a threat.

In fact, by “listening in” on thousands of macrophages over time, one by one, the researchers have identified not just a lone distress signal, or “word,” but a vocabulary of six words. Their studies show that macrophages use these six words at different times to launch an appropriate response. What’s more, they have evidence that autoimmune conditions can arise when immune cells misuse certain words in this vocabulary. This miscommunication can cause them to mistakenly attack substances produced by the immune system itself as if they were foreign invaders.

The findings, published recently in the journal Immunity, come from a University of California, Los Angeles (UCLA) team led by Alexander Hoffmann and Adewunmi Adelaja. As an example of this language of immunity, the video above shows many immune macrophages (blue and red) in both frames. You may need to watch the video four times to see what’s happening (I did). Each time you run the video, focus on one of the highlighted cells (outlined in white or green), and note how its nuclear signal intensity varies over time. That signal intensity is plotted in the rectangular box at the bottom.

The macrophages come from a mouse engineered in such a way that cells throughout its body light up to reveal the internal dynamics of an important immune signaling protein called nuclear NFκB. With the cells illuminated, the researchers could watch, or “listen in,” on this important immune signal within hundreds of individual macrophages over time to attempt to recognize and begin to interpret potentially meaningful patterns.

On the left side, macrophages are responding to an immune activating molecule called TNF. On the right, they’re responding to a bacterial toxin called LPS. While the researchers could listen to hundreds of cells at once, in the video they’ve randomly selected two cells (outlined in white or green) on each side to focus on in this example.

As shown in the box in the lower portion of each frame, the cells didn’t respond in precisely the same way to the same threat, just like two people might pronounce the same word slightly differently. But their responses nevertheless show distinct and recognizable patterns. Each of those distinct patterns could be decomposed into six code words. Together these six code words serve as a previously unrecognized immune language!

Overall, the researchers analyzed how more than 12,000 macrophage cells communicated in response to 27 different immune threats. Based on the possible arrangements of these temporal nuclear NFκB dynamics, they then generated a list of more than 900 pattern features that could be potential “code words.”

Using an algorithm developed decades ago for the telecommunications industry, they then monitored which of the potential words showed up reliably when macrophages responded to a particular threatening stimulus, such as a bacterial or viral toxin. This narrowed their list to six specific features, or “words,” that correlated with a particular response.

To confirm that these pattern features contained meaning, the team turned to machine learning. If they taught a computer just those six words, they asked, could it distinguish the external threats to which the computerized cells were responding? The answer was yes.

But what if the computer had five words available, instead of six? The researchers found that the computer made more mistakes in recognizing the stimulus, leading the team to conclude that all six words are indeed needed for reliable cellular communication.
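
To give a rough sense of how a test like this might look in code, here is a small sketch using synthetic data and scikit-learn. The feature values, stimulus labels, and classifier choice are all assumptions for illustration, not the team’s actual analysis.

```python
# Hedged, synthetic illustration of the comparison described above: train a
# classifier on all six candidate "code words" (signaling features), then on
# only five, and compare how well each distinguishes the stimuli. The data
# below are simulated stand-ins, not the published recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_cells, n_words = 1200, 6
stimuli = rng.integers(0, 3, size=n_cells)      # hypothetical labels: 0=TNF, 1=LPS, 2=other
# Each stimulus shifts the six word features differently, plus cell-to-cell noise.
centers = rng.normal(0, 1, size=(3, n_words))
features = centers[stimuli] + rng.normal(0, 0.7, size=(n_cells, n_words))

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc_six = cross_val_score(clf, features, stimuli, cv=5).mean()
acc_five = cross_val_score(clf, features[:, :5], stimuli, cv=5).mean()  # drop one word
print(f"accuracy with 6 words: {acc_six:.2f}, with 5 words: {acc_five:.2f}")
```

The point is only the shape of the comparison, same classifier and labels with six features versus five; in the published work, the conclusion rested on the real recordings.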

To begin to explore the implications of their findings for understanding autoimmune diseases, the researchers conducted similar studies in macrophages from a mouse model of Sjögren’s syndrome, a systemic condition in which the immune system often misguidedly attacks cells that produce saliva and tears. When they listened in on these cells, they found that they used two of the six words incorrectly. As a result, they activated the wrong responses, causing the body to mistakenly perceive a serious threat and attack itself.

While previous studies have proposed that immune cells employ a language, this is the first to identify words in that language, and to show what can happen when those words are misused. Now that researchers have a list of words, the next step is to figure out their precise definitions and interpretations [2] and, ultimately, how their misuse may be corrected to treat immunological diseases.

References:

[1] Six distinct NFκB signaling codons convey discrete information to distinguish stimuli and enable appropriate macrophage responses. Adelaja A, Taylor B, Sheu KM, Liu Y, Luecke S, Hoffmann A. Immunity. 2021 May 11;54(5):916-930.e7.

[2] NF-κB dynamics determine the stimulus specificity of epigenomic reprogramming in macrophages. Cheng QJ, Ohta S, Sheu KM, Spreafico R, Adelaja A, Taylor B, Hoffmann A. Science. 2021 Jun 18;372(6548):1349-1353.

Links:

Overview of the Immune System (National Institute of Allergy and Infectious Diseases/NIH)

Sjögren’s Syndrome (National Institute of Dental and Craniofacial Research/NIH)

Alexander Hoffmann (UCLA)

NIH Support: National Institute of General Medical Sciences; National Institute of Allergy and Infectious Diseases


Learning from History: Fauci Donates Model to Smithsonian’s COVID-19 Collection

Posted on by Dr. Francis Collins

Not too long after the global coronavirus disease 2019 (COVID-19) pandemic reached the United States, museum curators began collecting material to document the history of this devastating public health crisis and our nation’s response to it. To help tell this story, the Smithsonian Institution’s National Museum of American History recently scored a donation from my friend and colleague Dr. Anthony Fauci, Director of NIH’s National Institute of Allergy and Infectious Diseases.

Widely recognized for serving as a clear voice for science throughout the pandemic, Fauci gave the museum his much-used model of SARS-CoV-2, which is the coronavirus that causes COVID-19. This model, which is based on work conducted by NIH-supported electron microscopists and structural biologists, was 3D printed right here at NIH. By the way, I’m lucky enough to have one too.

Both of these models have “met” an amazing array of people—from presidents to congresspeople to journalists to average citizens—as part of our efforts to help folks understand SARS-CoV-2 and the crucial role of its surface spike proteins. As shown in this brief video, Fauci raised his model one last time and then, ever the public ambassador for science, turned his virtual donation into a memorable teaching moment. I recommend you take a minute or two to watch it.

The donation took place during a virtual ceremony in which the National Museum of American History awarded Fauci its prestigious Great Americans Medal. He received the award for his lifetime contributions to the nation’s ideals and for making a lasting impact on public health via his many philanthropic and humanitarian efforts. Fauci joined an impressive list of luminaries in receiving this honor, including former Secretaries of State Madeleine Albright and General Colin Powell; journalist Tom Brokaw; baseball great Cal Ripken Jr.; tennis star Billie Jean King; and musician Paul Simon. It’s a well-deserved honor for a physician-scientist who’s advised seven presidents on a range of domestic and global health issues, from HIV/AIDS to Ebola to COVID-19.

With Fauci’s model now enshrined as an official piece of U.S. history, the Smithsonian and other museums around the world are stepping up their efforts to gather additional artifacts related to COVID-19 and to chronicle its impacts on the health and economy of our nation. Hopefully, future generations will learn from this history so that humankind is not doomed to repeat it.

It is interesting to note that the National Museum of American History’s collection contains few artifacts from another tragic chapter in our nation’s past: the 1918 Influenza Pandemic. One reason this pandemic went largely undocumented is that, like so many of their fellow citizens, curators chose to overlook its devastating impacts and instead turn toward the future.

An NIH staff member created these paper flowers from the stickers received over the past several months each time he was screened for COVID-19 at the NIH Clinical Center. Credit: Office of NIH History and Stetten Museum

Today, museum staffers across the country and around the world are stepping up to the challenge of documenting COVID-19’s history with great creativity, collecting all variety of masks, test kits, vaccine vials, and even a few ventilators. At the NIH’s main campus in Bethesda, MD, the Office of NIH History and Stetten Museum is busy preparing a small exhibit of scientific and clinical artifacts that could open as early as the summer of 2021. The museum is also collecting oral histories as part of its “Behind the Mask” project. So far, more than 50 interviews have been conducted with NIH staff, including a scientist who’s helping the hard-hit Navajo Nation during the pandemic; a Clinical Center nurse who’s treating patients with COVID-19; and a mental health professional who’s had to change expectations since the outbreak.

The pandemic isn’t over yet. All of us need to do our part by getting vaccinated against COVID-19 and taking other precautions to prevent the virus’s deadly spread. But won’t it be great when—hopefully, one day soon—we can relegate this terrible pandemic to the museums and the history books!

Links:

COVID-19 Research (NIH)

Video: National Museum of American History Presents The Great Americans Medal to Anthony S. Fauci (Smithsonian Institution, Washington, D.C.)

National Museum of American History (Smithsonian)

The Office of NIH History and Stetten Museum (NIH)



Using R2D2 to Understand RNA Folding

Posted on by Dr. Francis Collins

If you love learning more about biology at a fundamental level, I have a great video for you! It simulates the 3D folding of RNA. RNA is a single stranded molecule, but it is still capable of forming internal loops that can be stabilized by base pairing, just like its famously double-stranded parent, DNA. Understanding more about RNA folding may be valuable in many different areas of biomedical research, including developing ways to help people with RNA-related diseases, such as certain cancers and neuromuscular disorders, and designing better mRNA vaccines against infectious disease threats (like COVID-19).

Because RNA folding starts even while an RNA is still being made in the cell, the process has proven hugely challenging to follow closely. An innovative solution, shown in this video, comes from the labs of NIH grantees Julius Lucks, Northwestern University, Evanston, IL, and Alan Chen, State University of New York at Albany. The team, led by graduate student Angela Yu and including several diehard Star Wars fans, realized that to visualize RNA folding they needed a technology platform that, like a Star Wars droid, is able to “see” things that others can’t. So, they created R2D2, which is short for Reconstructing RNA Dynamics from Data.

What’s so groundbreaking about the R2D2 approach, which was published recently in Molecular Cell, is that it combines experimental data on RNA folding at the nucleotide level with predictive algorithms at the atomic level to simulate RNA folding in ultra-slow motion [1]. While other computer simulations have been available for decades, they have lacked much-needed experimental data of this complex folding process to confirm their mathematical modeling.

As a gene is transcribed into RNA one building block, or nucleotide, at a time, the elongating RNA strand folds immediately before the whole molecule is fully assembled. But such folding can create a problem: the new strand can tie itself up into a knot-like structure that’s incompatible with the shape it needs to function in a cell.

To slip this knot, the cell has evolved immediate corrective pathways, or countermoves. In this R2D2 video, you can see one countermove called a toehold-mediated strand displacement. In this example, the maneuver is performed by an ancient molecule called the signal recognition particle (SRP) RNA. Though SRP RNAs are found in all forms of life, this one comes from the bacterium Escherichia coli and is made up of 114 nucleotides.

The colors in this video highlight different domains of the RNA molecule, all at different stages in the folding process. Some (orange, turquoise) have already folded properly, while another domain (dark purple) is temporarily knotted. For this knotted domain to slip its knot, about 5 seconds into the video, another newly forming region (fuchsia) wiggles down to gain a “toehold.” About 9 seconds in, the temporarily knotted domain untangles and unwinds, and, finally, at about 23 seconds, the strand starts to get reconfigured into the shape it needs to do its job in the cell.

Why would evolution favor such a seemingly inefficient folding process? Well, it might not be as inefficient as it first appears. In fact, as Chen noted, some nanotechnologists previously invented toehold displacement as a design principle for generating synthetic DNA and RNA circuits. Little did they know that nature may have scooped them many millennia ago!

Reference:

[1] Computationally reconstructing cotranscriptional RNA folding from experimental data reveals rearrangement of non-native folding intermediates. Yu AM, Gasper PM, Cheng L, Chen AA, Lucks JB, et al. Mol Cell. 2021 Feb 18;8:1-14.

Links:

Ribonucleic Acid (RNA) (National Human Genome Research Institute/NIH)

Chen Lab (State University of New York at Albany)

Lucks Laboratory (Northwestern University, Evanston, IL)

NIH Support: National Institute of General Medical Sciences; Common Fund


See the Human Cardiovascular System in a Whole New Way

Posted on by Dr. Francis Collins

Watch this brief video and you might guess you’re seeing an animated line drawing, gradually revealing a delicate take on a familiar system: the internal structures of the human body. But this movie doesn’t capture the work of a talented sketch artist. It was created using the first 3D, full-body imaging device using positron emission tomography (PET).

The device is called an EXPLORER (EXtreme Performance LOng axial REsearch scanneR) total-body PET scanner. By pairing this scanner with an advanced method for reconstructing images from vast quantities of data, the researchers can make movies.

For this movie in particular, the researchers injected small amounts of a short-lived radioactive tracer—an essential component of all PET scans—into the lower leg of a study volunteer. They then sat back as the scanner captured images of the tracer moving up the leg and into the body, where it enters the heart. The tracer moves through the heart’s right ventricle to the lungs, back through the left ventricle, and up to the brain. Keep watching, and, near the 30-second mark, you will see in closer focus a haunting capture of the beating heart.

This groundbreaking scanner was developed and tested by Jinyi Qi, Simon Cherry, Ramsey Badawi, and their colleagues at the University of California, Davis [1]. As the NIH-funded researchers reported recently in Proceedings of the National Academy of Sciences, their new scanner can capture dynamic changes in the body that take place in a tenth of a second [2]. That’s faster than the blink of an eye!

This movie is composed of frames captured at 0.1-second intervals. It highlights a feature that makes this scanner so unique: its ability to visualize the whole body at once. Other medical imaging methods, including MRI, CT, and traditional PET scans, can be used to capture beautiful images of the heart or the brain, for example. But they can’t show what’s happening in the heart and brain at the same time.

The ability to capture the dynamics of radioactive tracers in multiple organs at once opens a new window into human biology. For example, the EXPLORER system makes it possible to measure inflammation that occurs in many parts of the body after a heart attack, as well as to study interactions between the brain and gut in Parkinson’s disease and other disorders.

EXPLORER also offers other advantages. It’s extra sensitive, which enables it to capture images other scanners would miss—and with a lower dose of radiation. It’s also much faster than a regular PET scanner, making it especially useful for imaging wiggly kids. And it expands the realm of research possibilities for PET imaging studies. For instance, researchers might repeatedly image a person with arthritis over time to observe changes that may be related to treatments or exercise.

Currently, the UC Davis team is working with colleagues at the University of California, San Francisco to use EXPLORER to enhance our understanding of HIV infection. Their preliminary findings show that the scanner makes it easier to capture where the human immunodeficiency virus (HIV), the cause of AIDS, is lurking in the body by picking up on signals too weak to be seen on traditional PET scans.

While the research potential for this scanner is clearly vast, it also holds promise for clinical use. In fact, a commercial version of the scanner, called uEXPLORER, has been approved by the FDA and is in use at UC Davis [3]. The researchers have found that its improved sensitivity makes it much easier to detect cancers in patients who are obese and, therefore, harder to image well using traditional PET scanners.

As soon as the COVID-19 outbreak subsides enough to allow clinical research to resume, the researchers say they’ll begin recruiting patients with cancer into a clinical study designed to compare traditional PET and EXPLORER scans directly.

As these researchers, and other researchers around the world, begin to put this new scanner to use, we can look forward to seeing many more remarkable movies like this one. Imagine what they will reveal!

References:

[1] First human imaging studies with the EXPLORER total-body PET scanner. Badawi RD, Shi H, Hu P, Chen S, Xu T, Price PM, Ding Y, Spencer BA, Nardo L, Liu W, Bao J, Jones T, Li H, Cherry SR. J Nucl Med. 2019 Mar;60(3):299-303.

[2] Subsecond total-body imaging using ultrasensitive positron emission tomography. Zhang X, Cherry SR, Xie Z, Shi H, Badawi RD, Qi J. Proc Natl Acad Sci U S A. 2020 Feb 4;117(5):2265-2267.

[3] “United Imaging Healthcare uEXPLORER Total-body Scanner Cleared by FDA, Available in U.S. Early 2019.” Cision PR Newswire. January 22, 2019.

Links:

Positron Emission Tomography (PET) (NIH Clinical Center)

EXPLORER Total-Body PET Scanner (University of California, Davis)

Cherry Lab (UC Davis)

Badawi Lab (UC Davis Medical Center, Sacramento)

NIH Support: National Cancer Institute; National Institute of Biomedical Imaging and Bioengineering; Common Fund

