7.2 Sensory perception

When we first think about
sensory perception, it seems trivial. We look at the world around us,
and we see objects. We listen and we hear sounds. We sniff, and we smell fragrances. And it all fits together in one
unified world view: our perceived reality. However, when we start thinking about the underlying neuronal mechanisms, it turns out that we’re only
just beginning to get our first clues as to how sensory perception works. Sensory perception is an active process
in at least two different ways. Firstly, motor control is very important
for sensory perception. We actively gather sensory information. And that’s what we discussed
in the last video. We make eye movements
and head movements to see specific parts
of the visual field. And we move our hands and fingers to touch objects, actively gathering tactile sensory information. Sensory perception is active in a second way as well, in that our neuronal activity actively generates our sensory percepts. That sensory percepts are subjective can easily be seen by
looking at images like this that evoke bistable percepts. We have one set of photons
that fall upon the retina, but if you stare at
this image for long enough, you’ll see two alternating percepts. Either you see the two white faces
looking at each other, with their noses almost touching, or you see the black cup
in the middle. And it’s one set of light
that falls on the retina, but the brain generates
two distinct percepts from it. Clearly, sensory percepts are
internal constructs that are generated by
neuronal activity. And right now,
neuroscientists think that it’s largely the activity
of neurons in the neocortex that generates our sensory percepts. And so to gain causal and mechanistic insight into how sensory percepts occur, we need to look at the activity of individual nerve cells and how they talk to each other, perhaps with a particular focus upon what's going on in the neocortex. So sensory percepts are not just
out there in the world, but they’re actively generated
by the internal activity of our brain. Sensory percepts are also learned
through experience. And it’s because we’ve seen
many silhouettes before, that we recognize this as
a silhouette of a human. And so the reason
we recognize objects is because we’ve seen many
closely related images before, images of faces and people
from many different angles. And our brain has managed
to extract the appropriate statistics that associate
that information. And so, as we are all aware of
from watching young babies, we don’t immediately know
how to perceive the world, but we learn to see
the world around us. And the same is true of course,
for all of our other sensory inputs. So how are we going to investigate
subjective, learned sensory percepts? Well, we clearly need to be able to deliver well-controlled sensory stimuli. That's then going to evoke
neuronal computations inside the brain. And those are going to depend
upon context, learning, and also the goals
of the particular moment. And if we’re going to experimentally
investigate the subjective percept, then we also need the subject to report what it is feeling. And so we need to have a motor output where behavior reports the subjective percept. If this were a human psychophysics experiment, we would have two buttons: one that the human observer would push on seeing the two faces, and another on seeing the black cup. The observer would then push these two buttons in alternation as the percept flickered from one to the other. And so an important concept, in terms of experimental investigation, is that sensory percepts must be reported through motor output. We might consider this, all together, as a learned, goal-directed sensory motor transformation. And that might then represent the minimal, essential core feature of sensory perception: that it has to form a learned, perhaps relatively abstract, sensory motor transform. Now, sensory perception
is rather complicated, and it might be a good idea
to start with very simple forms of sensory perception, if we want to get to the detailed,
mechanistic and causal insight into how it actually works. The simplest form
of sensory perception is detection: simply, is there a stimulus or not? And that's what we're
going to investigate here, in the whisker sensory system. We implant metal head holders onto the heads of mice, so we can head-restrain them, and make recording chambers that allow us to do electrophysiological and imaging experiments at high resolution. The animal sits inside a cardboard roll where it feels relatively comfortable, and in addition we have a plate here in front that restrains the movement of its forepaws, so it can't, for example, scratch its nose. Onto the whisker we place a small piece of metal, and the whole animal sits on top of an electromagnetic coil. We can then drive current pulses through that electromagnetic coil to generate magnetic fields, and those magnetic fields act upon the metal particle and cause a force to be applied. We can make these magnetic pulses relatively short, about one millisecond in duration, and so a one-millisecond impulse is applied to the whisker, and that occurs at a random time. The animal has no cue to tell it
when that stimulus is coming. So the animal is waiting,
waiting, waiting, and every now and then, there’s a brief one millisecond
pulse on the whisker, and what the animal has
to learn in this behavior, is that that one millisecond
whisker stimulus means that reward is available
from this licking spout. And if it licks immediately
after the whisker stimulation, then it gets a droplet
of liquid reward. And we monitor the tongue contacting the spout here through a so-called piezo film, which gives us a little electrical signal when the tongue touches the spout. We can threshold that signal and use it to open a valve and deliver liquid water.
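As a side note, the threshold logic for lick detection can be sketched in a few lines of code. The trace values and the 0.5 V threshold below are invented for illustration, not the actual parameters of the experiment:

```python
# Minimal sketch of lick detection: threshold the piezo-film voltage
# and open the reward valve on the first threshold crossing.
# The trace values and the 0.5 V threshold are illustrative only.

def detect_lick(piezo_trace, threshold=0.5):
    """Return the index of the first sample exceeding threshold, or None."""
    for i, v in enumerate(piezo_trace):
        if v > threshold:
            return i
    return None

# A toy trace: baseline noise, then a tongue contact around sample 6.
trace = [0.01, 0.02, -0.01, 0.03, 0.02, 0.04, 0.9, 1.2, 0.8, 0.1]
lick_sample = detect_lick(trace)
if lick_sample is not None:
    print(f"lick detected at sample {lick_sample}; opening valve")
```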
For the trials in which we stimulate, there are two different outcomes. If the animal licks during the reward window, which we set at one second, then we open the valve and the animal gets a reward. On other trials, we give the same stimulus but the animal fails to lick. This is then a miss trial and, of course, no reward is delivered. So, as a first indicator, we can quantify performance as a so-called hit rate: the fraction of trials where the animal licks, relative to the total number of trials where we deliver a stimulus. We can also look at another trial type, where we don't stimulate, and sometimes the animal will spontaneously lick. We can think of those as false alarm trials. These might also be very interesting to analyze; perhaps the animal is dreaming of a whisker stimulus being delivered, and something in the neuronal activity then makes it lick. And there are other trials where we don't stimulate and the animal doesn't lick; those are then correct rejection trials.
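The tally of these four trial outcomes, and the resulting hit and false alarm rates, can be sketched as follows. The session data here are invented for illustration:

```python
# Sketch of how the four trial outcomes are tallied from a session log.
# Each trial records whether a whisker stimulus was delivered and whether
# the animal licked in the reward window; the example data are invented.

def classify(stimulus, licked):
    if stimulus and licked:
        return "hit"
    if stimulus and not licked:
        return "miss"
    if not stimulus and licked:
        return "false alarm"
    return "correct rejection"

trials = [(True, True), (True, False), (True, True), (True, True),
          (False, False), (False, True), (False, False), (False, False)]

counts = {"hit": 0, "miss": 0, "false alarm": 0, "correct rejection": 0}
for stim, lick in trials:
    counts[classify(stim, lick)] += 1

# Hit rate: licks on stimulus trials; false alarm rate: licks on no-stimulus trials.
hit_rate = counts["hit"] / (counts["hit"] + counts["miss"])
fa_rate = counts["false alarm"] / (counts["false alarm"] + counts["correct rejection"])
print(hit_rate, fa_rate)  # 0.75 and 0.25 for this toy session
```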
On the first day, when the animal goes into the behavioral apparatus, it knows that licking is a good thing, that it sometimes gets reward when it licks, but it doesn't know when. It has no idea that the whisker stimulation is a cue to initiate licking, and that when there's no stimulation, no reward is going to come. And so it licks just as much on stimulation trials as on no-stimulation trials. Through trial and error and reward-based learning, the animal learns that licking after a stimulus is a good thing, and it begins to pick up something like 70 to 80 percent of the trials in which we stimulate, licking in response to the stimulus. It also learns that when there's no stimulation, there's no point in licking, and so it learns to suppress licking in the absence of a stimulus. And it's really the divergence of these two curves that tells us that the animal has learned the behavior. We can also look at
the psychophysics of this behavior, where we plot stimulus strength against the performance of the animal. As we turn the stimulus strength down, the fraction of trials in which the animal perceives the stimulus and gives a motor output by licking goes down. And somewhere here we're at about halfway in performance, and that's the so-called psychophysical threshold. As we decrease the stimulus even further, there's no licking above the false alarm rate, and so here the stimulus is undetectable. This is a standard psychometric function for any given behavior.
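The psychometric curve is often modeled as a sigmoid that rises from the false alarm rate toward a maximal hit rate, with the psychophysical threshold at the halfway point. A minimal sketch, with invented parameter values:

```python
import math

# Sketch of a psychometric function: probability of licking rises
# sigmoidally from the false-alarm rate at weak stimuli toward a
# maximal hit rate at strong stimuli. Parameter values are invented.

def psychometric(s, threshold=1.0, slope=0.3, fa_rate=0.1, max_rate=0.9):
    """Probability of licking as a function of stimulus strength s."""
    sigmoid = 1.0 / (1.0 + math.exp(-(s - threshold) / slope))
    return fa_rate + (max_rate - fa_rate) * sigmoid

# At the psychophysical threshold, performance sits halfway between
# the false-alarm rate and the maximal hit rate.
p_at_threshold = psychometric(1.0)
print(round(p_at_threshold, 2))  # 0.5, halfway between 0.1 and 0.9
```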
Now we're interested in the neurobiology of this behavior. What's the neuronal activity that causally drives this learned, goal-directed sensory motor transformation? We've already looked a bit at the sensory pathway, where whisker deflection causes depolarization of sensory nerve terminals and action potential firing down the trigeminal nerve; information crosses the glutamatergic synapses in the brainstem, action potentials take the information to the thalamus, and another glutamatergic synapse then brings the thalamocortical action potential to the neocortex, in the primary somatosensory cortex. And then in the primary
somatosensory cortex, we have this exquisite
one-to-one mapping of the sensory periphery
where each whisker is individually represented
in an anatomical unit, a so-called barrel. And so if we deliver sensory information here to the C2 whisker, we might expect that, if the neocortex is involved, the C2 barrel column might be a key area upon which this behavior depends. So we can test that by
inactivating this part of the brain, and see if that makes any impact
upon behavioral performance. So here we have the animal
sitting in the behavioral apparatus, it’s performing the task, it’s picking up something like
80 percent of the stimuli, and licking in response to them, and at some time we inject tetrodotoxin (TTX) into the C2 barrel column of the mouse barrel cortex. That causes a dramatic and rapid decrement in behavioral performance. TTX is a blocker of voltage-gated sodium channels; it prevents action potential firing. And so here we've blocked
action potential firing, in the primary somatosensory cortex,
and that’s caused a deficit in behavior. So apparently, action potential firing
in the C2 barrel column, is essential in order
to perform this task. We can inject other agents, like CNQX and APV, to block AMPA and NMDA receptors, and we see that there's a similar deficit in performance. So apparently glutamatergic synaptic transmission is also essential to carry out this behavior. As control experiments, we can inject the Ringer's solution in which these agents are dissolved, and that has no impact. Or we can inject the tetrodotoxin into another brain area here, into the primary somatosensory forepaw representation, about a millimeter or so away from the C2 barrel column. And that also doesn't cause a deficit. So there's some degree of specificity
to these pharmacological inactivations. And it seems that action potential firing
and glutamatergic synaptic transmission are necessary in order to perform
the detection task. We can then see what happens
in the other direction. Can we substitute the whisker stimulation by directly stimulating
this area of the brain? And we can do that by expressing channelrhodopsin, through injection of viruses that express channelrhodopsin here, coupled with a fluorescent protein so we can see the area of the brain that's been injected. Here you see it in post-mortem sections: there's a small, localized area of the brain that expresses channelrhodopsin, and that of course corresponds to the C2 barrel column. And here, through mouse genetics, we've expressed the channelrhodopsin specifically in excitatory neurons. After injection of the virus, it takes a few weeks before the channelrhodopsin expresses, and during that time we can train the animal in the standard whisker learning task. So here the animal never sees blue light flashes; we just train it exactly as before, we stimulate the whisker, and the animal learns to lick in response. Then, once the animal performs this task at a high rate, on a given transfer test day we ask: if we now, instead of stimulating the whisker, make an optogenetic stimulation of cortex, what happens? Does the animal interpret that stimulus the same as a whisker stimulation? Does it perceive the same thing? Does that generate the licking motor output? Remarkably, in most of the animals we tested, the very first five-millisecond blue light flash drove licking behavior. Here's the light flash. Here's what we're seeing
on the piezo trace: the animal's tongue contacts the spout, here we open the valve, and the animal is now enjoying its reward. This turned out to be a robust phenomenon, where we could replace the peripheral sensory stimulus of the whisker with a light stimulation directed at the neocortex. And so we can directly replace sensory stimuli at the periphery by internal stimulation of neurons in the neocortex. We can also examine the learning
in the opposite direction. We can train animals first in the optogenetic detection task, where we now deliver five-millisecond blue light flashes to stimulate excitatory neurons in the S1 cortex. The animal learns to lick in response to the S1 stimulation, and after the animal learns that, we then give the very first whisker stimuli. We then ask: does the animal now interpret that stimulus in the same way, and transform it into a goal-directed licking motor output? And again, the answer is yes. We stimulate the whisker, the animal licks, and the transfer from optogenetic stimulation to a real peripheral stimulation is extremely good. And so it seems we can optogenetically program the behavior from S1 cortex. These data then tell us that activity in S1 cortex is both necessary and sufficient to generate the perception of a whisker stimulus, which then presumably gets transformed by downstream brain areas into a licking motor output
in order to obtain a reward. It’s therefore clearly interesting
to find out what types of neuronal activity are taking place
in the C2 barrel column during task execution. And here we’re making
membrane potential recordings from layer two, three parameter neurons
in the C2 barrel column. We stimulate the whisker, and this upper trace
is from a hit trial, the animal licks, it gets a reward, and so by definition,
this is a hit trial. If we now look at
the membrane potential trace, shortly after stimulation we see
there is a depolarizing sensory response that is presumably
the sensory response driven by the feet forward thalamocortical circuits then at a later time,
there’s a late depolarization here, with a couple of action potentials
riding on top of it, and that late depolarization,
nonetheless precedes licking by hundreds of milliseconds. And so there could be
a causal relationship between late depolarization
and driving the licking motor output. Down below we see a miss trial. We give the identical stimulus; it drives a depolarizing
sensory response, but the late component here
seems to be less obvious, and of course, there’s
no licking and no reward. It's a miss trial. These are just two individual trials, but they turn out to be representative of the trial averages for this particular recording. So here we take all the hit trials together and average them into the black trace, and we take all the miss trials and average them together into the red trace. At the time of the stimulus, you see that both hit and miss trials generate large, obvious sensory responses, and there's no obvious difference between the hit and the miss trials. So there's an early, reliable sensory response, and that's probably a good thing for primary somatosensory cortex: you'd expect there to be a good, reliable representation of the periphery. At later times these two traces diverge, with depolarization being more prominent on hit trials than miss trials. That's accompanied by enhanced action potential firing on hit trials compared to miss trials, and all of that precedes the licking by some hundreds of milliseconds. So here, in this late component of activity, we have something that correlates with the subjective percept of the animal, as reported by licking.
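The trial averaging described here can be sketched as follows. The membrane potential traces are synthetic, with one baseline, one early, and one late time point per trial:

```python
# Sketch of the trial-averaging shown for the membrane-potential data:
# hit trials and miss trials are averaged separately, and the late
# window is where the two means diverge. The traces are synthetic.

def average_traces(traces):
    """Point-by-point mean of equal-length trials."""
    n = len(traces)
    return [sum(t[i] for t in traces) / n for i in range(len(traces[0]))]

# Toy traces in mV: [baseline, early sensory response, late component]
hit_trials = [[-60.0, -52.0, -54.0], [-61.0, -51.0, -53.0]]
miss_trials = [[-60.0, -52.0, -59.0], [-61.0, -51.0, -60.0]]

hit_mean = average_traces(hit_trials)
miss_mean = average_traces(miss_trials)

# The early response (index 1) is similar; the late window (index 2) diverges.
early_difference = hit_mean[1] - miss_mean[1]
late_difference = hit_mean[2] - miss_mean[2]
print(early_difference, late_difference)
```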
We think that this late depolarization, correlating with the subjective percept, is interesting, because similar results have been found in humans. Here, in the work of Stanislas Dehaene and coworkers, measurements have been made of the human EEG signal across early visual cortex, with visual stimuli given just around the threshold for conscious perception. Some trials are perceived, indicated by the black trace, and other trials are not perceived, represented by the red trace. What Stanislas Dehaene and his colleagues suggest is that there's an early, reliable sensory response in the visual cortex, and that there are late changes of activity in the human brain that correlate with consciously perceived, or not perceived, stimuli. And so it looks like something similar might be happening in the human brain,
compared to what we’ve seen in the mouse. Now, what’s difficult
in the human situation, is to get a causal perspective
on what this late activity might be doing. And that’s something that we can do
in the mouse system. And so what we need to do is specifically inactivate late periods of activity. We can do that by using channelrhodopsin optogenetics and stimulating the GABAergic inhibitory neurons. Here we take the parvalbumin-expressing GABAergic neurons, which you might remember are the ones that inhibit perisomatically. We put channelrhodopsin specifically inside those neurons using mouse genetics; we can then deliver blue light flashes and stimulate action potential firing in the parvalbumin-expressing GABAergic neurons. That's going to release GABA and hyperpolarize the postsynaptic cells, the excitatory pyramidal neurons, so we get a hyperpolarization of the membrane potential of these excitatory neurons. And so we'll be able to shut down primary somatosensory cortex by shining blue light on GABAergic neurons expressing channelrhodopsin. So let's have a look at that
in the situation of our experiments. Here's our C2 whisker deflection, here's our sensory response; the black curve is the normal control condition. If we now turn the blue light on immediately around the time of stimulation, we get this light blue curve here: we get rid of the early sensory response. So now we've shut down S1 just at the time when sensory processing of the stimulus occurs, and when we look at the behavioral performance, that causes a deficit in the detection of the sensory stimulus. That's really very similar to what we found with the pharmacological inactivation experiments, where we injected TTX or CNQX and APV into S1 cortex and found that this caused a deficit in detection performance. Here we see the same thing, except now we've done the inactivation on a time scale of 50 milliseconds, and so we know that it's the neuronal activity immediately surrounding the whisker stimulus that drives sensory perception. The question that we were
really interested in is what happens if we leave that early sensory response intact and only hyperpolarize at late times. That's what you see in this purple trace: we turn the blue light on and stimulate the channelrhodopsin after a hundred milliseconds, which causes hyperpolarization at late times while leaving the early sensory response intact. And the question now is: if we just get rid of the late depolarization, does that make any difference in behavior? Indeed, it does. It causes a significant deficit in detection. And so the late depolarization not only correlates with the subjective percept of the animal, but also contributes to driving that sensory percept. And so we're slowly beginning
to understand mechanistically, the types of neuronal activity that occur in the mouse brain
during subjective sensory percepts. In this video we’ve seen that
head-restrained mice can be trained to perform
simple, perceptual tasks. And over the coming years,
we’ll get a detailed and mechanistic understanding of how simple,
sensory motor loops are performed. In the case of the detection task, we’ve seen that there appear to be
two phases to the sensory processing: an early, reliable response, and a late depolarization that seems to correlate with the subjective percept. In the years to come,
it will be important to understand the mechanisms that generate
that late depolarization. And also the learning mechanisms
that are involved in tying together the sensory input, with the
goal-directed motor output. In the next video
we’ll begin to examine the basis of the learning process.
