Updated: Aug 22, 2018
By Conor Dickson
Throughout the history of learning and memory research, the future has seemed to promise the exposition of the engram: the physical manifestation of a memory. The concept was first introduced by Richard Semon in The Mneme (first published in 1904 and translated into English in 1921). Semon provided conceptual guidelines for the engram, stating that it must exist as a representation of an event, encoded at the time of occurrence.
It must also exist independently of the phenomenology, or conscious remembrance, of memory, persisting in a dispositional state with the potential to be recalled through interactions with retrieval cues. Although many attempts were made to localize and describe engrams, the engram's exposition remained largely elusive for nearly the entire twentieth century.
The discovery of the molecular mechanisms of long-term potentiation, the persistent strengthening of connections between neurons, provided the first insights into the nature of memory itself. Long-term potentiation, or LTP, was immediately recognized as a phenomenon capable of encoding information in real time. LTP fulfilled many of Semon's prerequisites for the engram: a persistent change in brain state resulting from an experience or event. However, even though the induction of LTP is a means of encoding information, memories are not found in individual excitable synapses. Memories exist in circuits, where specific populations of neurons fire across several regions. Unlike a filing cabinet with manila folders, human memory is composed of widely distributed networks of neural actors. The storage of a memory depends on its context and the way in which it was experienced. For example, a visual memory is partially composed of neurons in the visual cortex, while an auditory memory is partially composed of neurons in the auditory cortex.
With the aid of increasingly precise measures of cellular activity and genetic manipulation, the most elusive features of the engram have become more attainable than ever. In 2012, researchers were able to selectively manipulate fear engrams using genetically altered mice carrying a gene for a specialized light-sensitive membrane protein, an opsin. To isolate the specific neurons involved in a memory, the opsin was expressed only in neurons that were active during a period of learning. These membrane channels either excite or depress the activity of the neuron. Therefore, once the opsin was expressed, researchers could activate the cells of a memory by applying light to the region of interest. After being conditioned in an unpleasant environment while expressing the membrane protein, the same mice clearly demonstrated recall of the negative event when the cells of the engram were artificially excited. Many subsequent studies have since replicated these findings and even produced false memories, in which a mouse falsely associates an environment with an unpleasant stimulus. With these breakthroughs, the study of memory has transitioned to a new tier of research: from a descriptive science of observing behavior to a predictive endeavor in which memory can be captured and manipulated.
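The tag-and-reactivate logic behind these experiments can be sketched in a few lines of code. The following is a toy illustration only, not a model of real neural dynamics: the neuron count, sparsity level, and function name are all illustrative assumptions.

```python
import random

def simulate_tag_and_reactivate(n_neurons=100, sparsity=0.2, seed=0):
    """Toy sketch of activity-dependent opsin tagging (illustrative only).

    1. During learning, a sparse subset of neurons is active (the putative engram).
    2. Opsin expression is restricted to exactly those active cells.
    3. Light delivery later drives every opsin-expressing neuron at once,
       regardless of any natural retrieval cue.
    """
    rng = random.Random(seed)
    # Step 1: sparse activity during the learning episode.
    active_during_learning = {i for i in range(n_neurons) if rng.random() < sparsity}
    # Step 2: tagging couples opsin expression to that activity.
    tagged = set(active_during_learning)
    # Step 3: light indiscriminately reactivates the whole tagged set.
    reactivated_by_light = set(tagged)
    return active_during_learning, tagged, reactivated_by_light

learned, tagged, lit = simulate_tag_and_reactivate()
# The light-driven population is identical to the learning-active population.
assert lit == tagged == learned
```

The point of the sketch is the identity in the final assertion: artificial stimulation reactivates the tagged population wholesale, which is precisely the indiscriminate activation discussed below.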
However, there is still much to learn about memory. Memories do not exist as islands of information, as they are often treated in fear association studies. The retrieval of a memory is often induced by cues that trigger cascades of further retrievals. Fear association studies involve very little behavioral complexity and therefore cannot address the specific content of the retrieved memories. The minutiae of declarative memories, such as the perceptual re-experience of an event, are unavailable to studies that simply look for associative behavior. Additionally, specific cues must be present to induce a memory, yet these studies indiscriminately activate every neuron containing the opsin protein. While the artificially stimulated neurons elicit activity, those same neurons exhibit greater activity during natural memory recall, meaning that there are still intrinsic differences between the two processes. One possible cause of this difference is the incoherent temporal and spatial pattern of artificial stimulation: memory retrieval requires activation of specific neurons in an ordered sequence, a process that has yet to be replicated artificially.
As these limitations become more apparent, their importance will continue to increase. Brain research will slowly transition from a descriptive to a predictive science, which means that it will not be enough to indiscriminately stimulate or inhibit one brain region or another. It is essential that the transition be accompanied by more delicate techniques and theory that appreciate the brain for what it is: more complex than we imagine. Previous pursuits of the engram failed to realize this fact and subsequently failed to recognize the distance between current research and an adequate description of memory. In spite of the long process that has produced our current level of understanding, exposition of the engram seems not far off. Optogenetic techniques are improving, as is electrode technology (see Neuropixels). Scientific progress will be made in describing the nature of human memory, and the greatest constraint is time.
Josselyn, S. A., Köhler, S., & Frankland, P. W. (2015). Finding the engram. Nature Reviews Neuroscience, 16(9), 521–534. https://doi.org/10.1038/nrn4000
Liu, X., Ramirez, S., Pang, P. T., Puryear, C. B., Govindarajan, A., Deisseroth, K., & Tonegawa, S. (2012). Optogenetic stimulation of a hippocampal engram activates fear memory recall. Nature, 484(7394), 381–385. https://doi.org/10.1038/nature11028
Reijmers, L. G., Perkins, B. L., Matsuo, N., & Mayford, M. (2007). Localization of a Stable Neural Correlate of Associative Memory. Science, 317(5842), 1230–1233. https://doi.org/10.1126/science.1143839