Tuesday, November 11, 2008

Bio-electric slide

Microbial ecology meets electrochemistry: electricity-driven and driving communities.
http://www.ncbi.nlm.nih.gov/pubmed/18043609

I'm going to explore a different tactic for this week, which is to provide a review of... a review! This field is fascinating but still nascent, and therefore highly technical. So I'll try to boil down an overview of this topic into something even shorter, simpler and, with luck, sweeter. You may ask about its relation to neuroscience, but if you get to the end you'll see my plug for why neurons might be useful to consider.

The future of energy
As oil prices rise and supplies of fossil fuels dwindle, it has become clear that the future of energy lies in currently underutilized sources. In terms of generating electricity, probably the most important form of energy our world needs, we have heard a lot about solar and wind power lately. Hydro-electric power is seeing heavy use in developing countries such as China, but it does have some significant ecological side-effects, especially if its implementation is not well thought out. You may be aware of even newer and more unconventional technologies for energy generation, such as bio-diesel, ethanol, and methane extraction from farm waste (all of these methods rely on combustion).

The bottom line is, we are going to need to harness ALL available sources of energy that we can conceive of in order to meet our future energy needs, both static (the electric grid) and portable (vehicles). One rapidly developing area of research involves harnessing bacteria to generate power. Some early designs include the use of methane-producing bacteria to consume sewage or other waste water and release methane or other combustible gases.

Bio-electricity as an energy source
A newer direction for biological research in the field of energy generation is harnessing bio-electricity. All living cells are constantly generating a little bit of electricity in the form of a voltage across the membrane that separates the inside of the cell from the outside world. This lipid membrane forms what we call a 'capacitor'; that is, it separates charges and stores energy. The capacitor is charged by small currents that cross the cell's membrane, carried by ions (such as sodium and potassium) in water. This is as opposed to the system of electricity we typically think of, which involves (roughly) the flow of electrons through an organized metal lattice - a copper wire, for instance. It would be great if we could access that energy - however, this would require us to hook up wires outside the cell (not too much of a problem) and also inside the cell (well, that is a problem). How can you get access to the inside of a cell without killing it? The answer is, you can, but it's difficult and time-consuming. Some people are actually thinking about it, but we can come back to that later.
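Just for fun, here is a back-of-the-envelope sketch (in Python) of how little energy one cell's membrane 'capacitor' actually holds. The numbers are textbook ballpark values I'm assuming for illustration (about 1 microfarad per square centimeter of membrane, a 70 millivolt resting voltage, and a bacterium-sized cell), not figures from the article:

```python
import math

# Back-of-the-envelope sketch with assumed textbook values (not from the article):
# how much energy is stored in one cell's membrane "capacitor"?
SPECIFIC_CAPACITANCE = 1e-6      # farads per cm^2, a common rule of thumb for lipid membranes
MEMBRANE_VOLTAGE = 0.07          # volts (~70 mV resting potential)
CELL_RADIUS_CM = 1e-4            # 1 micron radius, roughly bacterial scale

area_cm2 = 4 * math.pi * CELL_RADIUS_CM**2                 # surface area of a spherical cell
capacitance = SPECIFIC_CAPACITANCE * area_cm2              # total membrane capacitance (F)
energy_joules = 0.5 * capacitance * MEMBRANE_VOLTAGE**2    # E = 1/2 * C * V^2

print(f"Membrane capacitance: {capacitance:.2e} F")
print(f"Stored energy per cell: {energy_joules:.2e} J")
# ~3e-16 J per cell: you'd need astronomical numbers of cells to light a bulb.
```

The answer comes out around 10^-16 joules per cell, which is exactly why tapping the inside of individual cells isn't a practical power source.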

However, there are also bacteria, discovered in the early 20th century, that can actually generate electric currents outside of themselves, without any involvement (or at least minimal involvement) of the cell's interior. Now we're talking. How does this work? Interestingly, these bacteria facilitate the transfer of electrons from organic matter (i.e. sewage or other organic waste) to metals like iron - or indeed, to an electrical anode. In a battery, the anode is the metal contact that collects electrons. Therefore, these bacteria, when mixed with organic matter and grown on an anode (the bacteria tend to grow as a thin film, or biofilm) within a battery, will actually power that battery. So far researchers have achieved nearly 1 volt and currents of several milliamps from a small bacteria-fueled battery. In larger installations, up to 1 kWh of electricity can be gleaned from 1 kg of waste (1 kWh, or kilowatt-hour, is the amount of energy that ten 100-watt light bulbs use in an hour).
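To put those numbers in perspective, here's a quick bit of arithmetic (a rough sketch; the 5 milliamp figure is my own assumption standing in for 'several milliamps'):

```python
# Rough arithmetic sketch of the figures quoted above (illustrative numbers only).
voltage = 1.0            # volts, roughly what a small bacteria-fueled battery achieves
current = 5e-3           # amps (a "several milliamp" current, assumed here as 5 mA)

power_watts = voltage * current
print(f"Power from one small cell: {power_watts * 1000:.1f} mW")

# 1 kWh is a unit of energy: 1000 W sustained for one hour.
kwh_in_joules = 1000 * 3600
print(f"1 kWh = {kwh_in_joules:,} J")

# At a few milliwatts, how long would one small cell take to deliver 1 kWh?
hours = kwh_in_joules / power_watts / 3600
print(f"Time for one small cell to produce 1 kWh: {hours / 24 / 365:.0f} years")
```

In other words, a single small cell at a few milliwatts would take on the order of decades to deliver one kilowatt-hour, which is why the larger waste-fed installations are the interesting part.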

Although these amounts of power are small, and the process is currently somewhat inefficient, this design is in its infancy. A further amazing feature of bio-electric batteries is that if the bacteria inside are grown over time (unclear from the article, but I would guess days to weeks), the circuit becomes MORE efficient as the bacterial community develops and interacts. Perhaps the bacteria are trying to help us out with this energy problem?

Some researchers have, at a purely theoretical level, already begun to model the possibility of harnessing the type of electricity I mentioned earlier. In other words, they are designing models that involve harnessing the voltage and currents that cells use across their membranes. Brain cells are some of the most electrically active, diverse, and efficient cells in the body. If we could find a way to harness the electrical energy of neurons, or an artificial system based upon their physiology, we would have a real winner. Another cell type that merits study is the electric organ of the electric eel, which can generate several hundred volts within close proximity to the animal. I'm not suggesting that we have huge tanks full of eels to power our houses (although apparently they do in Japan) - but you should certainly keep an eye out for these interesting ideas when it comes to the future of energy, and our world.

Friday, November 7, 2008

Never fear!

What a month! I wanted to give people a chance to catch up on some of the older posts which they apparently didn't have a chance to read. Now that everyone is caught up again, let's proceed! The hiatus certainly wouldn't have anything to do with my committee meeting, vacation, article submission, and broken computer...

Fear not ladies and gents, the next post will be up sometime tomorrow. Until then, cheers!

Wednesday, October 8, 2008

Things that glow in the night

I'm presenting something a little different this week. As you might know, the Nobel prize in chemistry was awarded to a trio of scientists for their pioneering work in the field of fluorescent proteins. Osamu Shimomura, Roger Y. Tsien, and Martin Chalfie, who shared the prize, have defined the field for the last 50 years. Dr. Shimomura identified the first two members of the ever-expanding family of these molecules, which are called aequorin and the aptly named Green Fluorescent Protein (GFP). Dr. Chalfie performed early work with the GFP gene. Dr. Tsien is the current guru who has purified many more versions of these proteins, and also created many modified versions that fluoresce different colors.

You might find yourself asking, what exactly is a fluorescent protein? A fluorescent protein contains a special chemical structure called a 'chromophore', a fancy term for something that can absorb light (sometimes several colors), and then release, or 'emit', it as one specific color (for our purposes). For example, GFP can absorb light in the UV and visible blue portions of the electromagnetic spectrum, and then emit green light. The actual chemistry is a little more complicated than that, but I'm no chemist so that's the best you're going to get out of me.
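If you like numbers, here's a tiny sketch of the energy bookkeeping involved. The wavelengths are commonly cited values for GFP (roughly 488 nm blue excitation and 509 nm green emission) that I'm assuming for illustration; they aren't taken from the post above:

```python
# Minimal sketch of the absorb-then-emit energy bookkeeping, using commonly cited
# GFP wavelengths (~488 nm blue excitation, ~509 nm green emission) as assumed values.
PLANCK = 6.626e-34      # J*s
LIGHT_SPEED = 3.0e8     # m/s

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a single photon at the given wavelength, in electron-volts."""
    joules = PLANCK * LIGHT_SPEED / (wavelength_nm * 1e-9)
    return joules / 1.602e-19

absorbed = photon_energy_ev(488)   # blue light in
emitted = photon_energy_ev(509)    # green light out

print(f"Absorbed photon: {absorbed:.2f} eV, emitted photon: {emitted:.2f} eV")
print(f"Energy lost inside the chromophore: {absorbed - emitted:.2f} eV")
# The emitted photon always carries a bit less energy (longer wavelength) -
# that gap is the 'Stokes shift'.
```

That small energy difference is shed inside the protein, which is why the emitted color is always shifted toward the red end of the spectrum relative to the light absorbed.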

Why are these proteins important? They are fantastic laboratory tools to help us locate and track other proteins we might be studying. With the discovery of DNA, the elucidation of genes and now the sequencing of the entire human genome, we have the capability to combine the sequence of the gene we are studying with the sequence for a fluorescent protein. This creates a hybrid, or 'tagged' protein, that contains the protein we are studying with a nice little light-sensitive tag. We can use this hybrid to determine what type of organs or cells our protein is found in, and with advanced microscope techniques we can now even follow the movements of individual 'tagged' proteins inside a single cell. We do this by flashing the appropriate type of light into the cell and looking for the fluorescent response of the tag. Additionally, some of these fluorescent proteins like aequorin will only fluoresce in the presence of other molecules like calcium. This allows us to use them as indicators, or 'probes', for these molecules within cells. Also, we can use multiple, complementary fluorescent proteins attached to two different proteins we might be studying to see if these two proteins interact by creating a chain reaction of light. All told, this is a powerful technique because it allows us to use a minimally invasive, genetically encoded system to study our genes of interest.
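For the programmers out there, here's a toy sketch of what 'tagging' means at the DNA level: you splice the coding sequence of your gene onto the coding sequence of GFP, in the same reading frame, usually with a short flexible linker in between. The sequences below are simplified placeholders I made up purely for illustration:

```python
# Toy sketch of how a 'tagged' protein is encoded at the DNA level: the coding
# sequence of your gene of interest is joined in-frame to the GFP coding sequence,
# usually with a short flexible linker in between. All sequences are placeholders.
GENE_OF_INTEREST = "ATGGCTAAAGTT"      # hypothetical gene, minus its stop codon
LINKER = "GGTGGAGGTGGATCA"             # encodes a short flexible Gly/Ser linker
GFP_CDS = "ATGAGTAAAGGAGAAGAA"         # placeholder standing in for the GFP coding sequence

def make_fusion(gene_cds: str, tag_cds: str, linker: str = LINKER) -> str:
    """Join gene + linker + tag; every piece must stay in the same reading frame."""
    for name, seq in [("gene", gene_cds), ("linker", linker), ("tag", tag_cds)]:
        assert len(seq) % 3 == 0, f"{name} would shift the reading frame"
    return gene_cds + linker + tag_cds

construct = make_fusion(GENE_OF_INTEREST, GFP_CDS)
print(construct)
```

The only real rule enforced here is the one that matters in the lab: every piece has to be a multiple of three bases long, or the ribosome reads the tag out of frame and you get gibberish instead of a glowing protein.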

I thought I would provide a brief review of some of the important discoveries or techniques that hinged upon fluorescent proteins.

  • ATP synthase movement [1997]. Researchers attached a long fluorescent protein filament to a protein called ATP synthase. ATP synthase is responsible for producing ATP, the main energy source used by the cells in our bodies. Think of it like a windmill producing renewable energy - not oil. In fact, the comparison is apt, because it turns out (no pun intended) that ATP synthase rotates as it produces ATP. The synthase uses electrochemical energy in the cell to drive a 'crankshaft' that rotates a barrel-like portion on top that produces ATP. The researchers demonstrated this by literally taking very fast freeze-frame photos of the fluorescent marker that showed it turning around. This is one of those beautiful, elegant experiments that convinced me to pursue science as a career. I always find it amazing to think of microscopic proteins functioning as little machines. It really demonstrates that the fundamental laws of mechanics can operate on a minute scale.

  • Brainbow [2007]. Yes, you read that correctly. From the Center for Brain Science at Harvard, featuring Josh Sanes and Jeff Lichtman, scientists created a rainbow in the brain. Imagine every cell in the brain fluorescing in a different color. Well, that's a slight exaggeration, but using combinations of genetically encoded fluorescent proteins the researchers were able to generate roughly 90 colors in the mouse brain. Since there were so many colors, it's highly likely that adjacent neurons, or neurons connected to each other, will be different colors. This allows researchers to differentiate between different components of brain circuits when recording electrical currents or imaging brain activity. Beyond just looking cool, it could prove to be a very powerful tool to help us elucidate how individual neurons contribute to the functioning of brain circuits, and complex behavior.
The uses of fluorescent proteins are legion. As scientists, we use them almost every day. They have been critical tools for basic science and drug discovery for many years, and their uses are still expanding.

Thursday, October 2, 2008

Fear and forget

Amygdala intercalated neurons are required for expression of fear extinction.
http://www.ncbi.nlm.nih.gov/pubmed/18615014

Disclaimer: I appreciate all the positive feedback on the blog that I've received over the last month. I'm glad you guys are enjoying it! That having been said, I do get the occasional request for a shorter blog entry. So I suppose once a month, you guys deserve something you can read and digest in 5 minutes or less. So here goes nothing. I hope you appreciate the fanservice.

Today's Article
There are many fearful things in this world. Not the least of them is returning home at 10 PM only to realize you have a blog entry due by the next day. In all seriousness, fear is an adaptive mechanism geared towards survival. It allows us to mount a superior response in situations that demand extra attention. For example, fear is that involuntary emotion that causes us to run very, very fast the other way when we see an angry editor (or hungry lion) bearing down on us. Nevertheless, it is critical that we do not become consumed by the long-term effects of fear, which can become crippling and debilitating. Post-traumatic stress disorder (PTSD), for example, afflicts sufferers with extreme anxiety. The relationship between long-term fear (anxiety) and acute fear is still poorly understood, but this study attempts to shed further light upon the normal pathways by which fear is handled in healthy individuals.

Today's researchers asked whether the process of reducing fear after a harrowing situation, and removing the emotion of fear from memories, can be traced to a single region of the brain. This process is called 'fear extinction'. In fact, they found it could be traced to a single cell type, named the ITC neuron. These neurons reside in discrete clusters within the central fear processing center of the brain, the amygdala. Using a nifty biochemical trick, the researchers were able to piggyback a toxic molecule onto a chemical signal that these cells normally respond to in the brain, but neighboring cells don't recognize. This technique is also seeing use in the treatment of cancer. They applied this concoction in the vicinity of the ITC cell clusters in a rat's brain. Once inside the cell, this toxic molecule specifically killed the ITC cells.

The researchers then tested the rats for their responses to fear. They found that rats missing the ITC neurons were able to respond normally to acutely fearful situations, but they continued to show elevated fear much longer than normal rats (up to a week). This result strongly suggests that ITC neurons play a significant role in fear extinction. Although the length of the study was relatively short, it provides hope that specific neurons, whose activity might be targeted by drugs or other therapies, are involved in the processes that underlie excessive fear. The study does not describe a link between this region and any of the (rather poor) animal models of PTSD. Nevertheless, it is intriguing that one neuron type could be responsible for such a complex behavior as fear extinction. Substantial further work remains to validate these cells as a therapeutic target for anxiety or PTSD, but the discovery of the ITC neurons' role is a significant milestone. An interesting next step for this research would be to use functional imaging to study these clusters in the amygdala of patients suffering from PTSD. This technique would allow us to determine whether this brain region is functioning abnormally - it could be that a PTSD event causes such a sustained, high level of fear that these circuits are 'overloaded', so to speak. If so, we might be able to study these cells more carefully for drug targets. Treatment of deep brain regions is still very difficult, but some intervention and possibly prophylaxis (for soldiers) might be possible.

Wednesday, September 24, 2008

Replenishing the Brain

An intrinsic mechanism of corticogenesis from embryonic stem cells
http://www.ncbi.nlm.nih.gov/pubmed/18716623

Rating: Thought-Provoking

Article Summary:
The Hypothesis
The brain is an extraordinarily complex organ, and the portion of it we refer to as the cortex - the most recent portion, evolutionarily speaking - is the most complex. When you think of a stereotypical image of 'the brain', it's predominantly the cortex that you're picturing. It is generally thought that the highest level brain functions and calculations occur there. Like the rest of the nervous system, the primary type of cell used to send, receive, and process information of all sorts is the neuron, and the cortex contains many different varieties. We are only beginning to scratch the surface when it comes to understanding the differences, from the perspective of the calculations they perform, to the partners (other neurons) that they interact with, to how they are born and develop. Many terrible neuro-degenerative diseases lead to irreversible death of neurons; most neurons in the brain cannot regrow (note: the list of exceptions is growing). Parkinson's and Alzheimer's are two examples of such diseases. As such there is an abiding interest in the neuroscience community to discover ways that we might be able to replace lost neurons. One way might be through the use of stem cells - those amazing little cells that have the capacity to become any cell in the body if given the correct instructions. The researchers behind today's study successfully program mouse stem cells to become neurons, and also integrate them into the brain with almost no further manipulation. Although this field is still very young, it holds promise that we might one day be able to replenish damaged brains. That certainly leads us to some interesting questions concerning possible changes to perception, memory, and identity - but read on, we'll come to those issues later.

The Setup
As mentioned, the cortex is a vast and complicated region of the brain. Different neurons residing in the cortex 'project' (connect) to many different regions of the nervous system; all of them, in fact. The cortex exerts control over most conscious thought and action, and even some involuntary functions as well. Within the cortex, neurons reside in different layers - there are six layers altogether - and their functions and connections to other neurons are mainly determined by their layer. The cortex itself is loosely divided into different regions representing different functions, such as movement control and visual processing. Information such as sights, sounds, and smells comes into the first three layers of the cortex (1 - 3), and any resulting changes or interpretations - 'processing' - of those inputs are sent out from the last three layers (4 - 6), although layer 3 also sends significant output. The better understood neurons in the cortex are the 'projection' neurons, which send and receive information via long arms called axons and dendrites, respectively. These are the wires of the nervous system. In the visual cortex, one of the better understood regions, calculations are performed in vertical columns as information flows up from layer 1 to 6. Raw information comes up from 'lower' brain regions, is processed in columns, and passed on for further processing in other columns - and finally is 'perceived'. The current study is not so concerned with the functions of the cortical areas, although eventually the researchers will have to show functional integration of their stem cell derived neurons. For now, however, it is concerned with the typical placement of neurons in layers and regions, and with the connections of those neurons to 'lower' brain regions and other cortical areas. I hope I've provided enough information to show you that the cortex has a complex but in many ways predictable architecture, and this can be exploited for the purposes of studying integration of new neurons.

The experiment presented in this article is quite simple, yet elegant. The researchers exploited the fact that when grown in the absence of any other cues, stem cells form primitive neurons, often called precursors or progenitors. So they obtained mouse stem cells, and raised them in petri dishes under very minimal conditions. The researchers also took this one step further, and provided a chemical that inhibits the action of a protein called Sonic Hedgehog (no, I'm not joking). Sonic Hedgehog is a protein released by neurons that causes them to develop away from the type of cortical projection neurons I was discussing above, and instead become interneurons (another variety of neuron). Inhibiting its activity allowed the researchers to produce very nice projection neurons of different types. The type produced was dependent upon the amount of time that the stem cells were allowed to grow in their petri dishes. The longer the cells grew, the higher layer neurons they came to resemble; so after 10 days, the new neurons resembled layer 1 neurons. After 17 days, they resembled layer 3 neurons or above. These results are based on different proteins found in the neurons from different layers.

Then came the real test. The researchers surgically grafted their stem cell-derived neurons into normal adult mouse brains, in the frontal area of the cortex. They had engineered a protein that glows green into the new neurons to identify them as distinct from the original neurons. After a month, the researchers examined the results of the graft. Impressively, the new neurons appeared to have integrated fully into the mouse brains. They were arrayed as expected in layers, and also exhibited the expected pattern of axons and dendrites. There was one very unexpected aspect of their integration, however. Almost to a cell, the new neurons had formed connections with brain regions involved in vision and image processing. This was in spite of the fact that they had been originally grafted into the frontal cortex, an area that is not associated with vision. They did not form connections with other brain regions. At first glance, then, this demonstrates that specific types of projection neurons can be created from stem cells and properly integrated back into the brain.

What does the pithy scientist think?
Disclaimer: what follows is merely opinion, possibly speculation, and occasionally hearsay. But it's the best part, darn it!

I find this paper to be quite provocative, and it certainly sets the ball rolling for further investigation. Let's hit our pros and cons:

Let's review the points that support the hypothesis that stem cells can be instructed to produce neurons that can successfully integrate back into the adult brain:
  1. Stem cells can easily be instructed to form cortical projection neurons.
  2. These neurons contain the expected proteins that define different types of cortical projection neurons.
  3. When introduced into adult brains, these new neurons are appropriately located and oriented, as assessed visually.

What are some shortcomings of the paper?
  1. There is no evidence that the new neurons, observed visually, actually function as expected in the appropriate brain circuits.
  2. There is no evidence that the new neurons could help an injured mouse recover lost brain function.
  3. It is not clear we can create anything other than projection neurons geared toward the visual system, although if I had to guess I'd say it probably would be possible.

What further experiments should be done?
  1. The researchers need to find mice with damage to their visual cortex, and see if grafting in these neurons can recover lost visual function.
  2. We need to discover if other growth conditions can lead to the development of other types of cortical neurons (auditory, motion control, and so on).
  3. I would be interested to know if the researchers can develop sub-cortical neurons, from the so-called 'lower' brain regions, in a bid to address damage to the spinal cord.

Replenishing neurons in the brain that have been lost to damage or disease is a fantastic idea, but it is not without its share of ethical implications. Obviously it will take a lot of study to show that stem cell grafts do not cause any adverse pathological effects, such as cancer or epilepsy. But beyond the obvious health concerns come the interesting issues of how much the different neurons in our brains define who we are. A heart, a kidney, a ligament - these are, as we understand them, just machines that allow our bodies to survive. Our brains are a different matter altogether. Even the seemingly unobtrusive idea of replacing lost neurons within the motion and motor function circuitry might alter not only our ability but our preference for different motions. Replenishing dopamine neurons lost to Parkinson's might dramatically affect our sense of reward, success, and satisfaction. This is to say nothing of restoring neurons to areas of the brain associated with memory or personality. I do not possess any great insight into the approach to these ethical dilemmas, other than to repeat the age-old mantra that only fools rush in. We must take our time and fully understand the consequences of how these potential treatments might affect our humanity before jumping blithely in.

Wednesday, September 17, 2008

Through the retina, darkly

The circadian clock in the retina controls rod-cone coupling.
http://www.ncbi.nlm.nih.gov/pubmed/18786362

Rating: Slam Dunk

Article Summary
The Hypothesis
We perceive light and images using a sheet-like structure stretched across the rear of our eyes called the retina. Within the retina there are two types of cells that detect light; the conventional view held that the retina used 'cone' cells to sense light under normal conditions, and 'rod' cells in low-light or dark conditions. In this study, researchers turn that view on its head by demonstrating that our use of retinal rods in darkness is not dependent upon the actual amount of ambient light, but instead depends on our body's sleep-wake cycle (aka. circadian rhythm). Even more exciting was their discovery that when active at night, rods actually usurp control of cones to (hypothetically) increase the amount of low-light information being sent to the brain.

The Setup
The question at the heart of this study is surprisingly straightforward, despite the complex hypothesis. First some brief background: we have two systems for perceiving light in our eyes, both located in the retina. The cone system involves cone-shaped cells of three types, each responding to a different type of light; roughly, red, green, and blue. These cells are really fast; if you're familiar with movies, you could say they have a high framerate. That is, they are capable of processing many different images per second. The rod system involves - you guessed it - cells shaped like rods that are great at detecting really small amounts of light - but they are slow as molasses. If we used them all the time we'd never be able to drive, play baseball, or watch our favorite YouTube videos. Also, there's only one kind of rod, meaning that the images they generate are monochrome. The question is this: how does our brain decide which of these two systems to use? Is it based on the amount of ambient light around us? Do we mix the two systems together in some way? Do we turn off the cones at night, or the rods during the day? A predominant theory prior to this work held that there were separate circuits from rods and from cones, like different video feeds or cameras in a newsroom, that connected to the brain. The brain then decided which system to use after some processing... sounds complicated, doesn't it? Well, the answer is actually quite elegant and simple, and today's researchers are going to show you how it all works.

The Experiment
To come up with a hypothesis, the researchers drew upon two recently established features of the retina. First, it was found that rods and cones are electrically coupled to one another via structures called gap junctions. Gap junctions are protein bridges that allow charged ions (and other small particles) to move between cells, as if someone had strung a wire between them. However, previous work had shown that these connections were very weak - perhaps there were not that many junctions, or many of them were closed. Critically, those experiments were performed during the daytime. The second feature of note is that dopamine is released within the retina according to a sleep-wake, or circadian, cycle. Dopamine is a chemical used within the nervous system to transmit information; such molecules are termed neurotransmitters. It is traditionally associated with the enjoyment of pleasing activities, and pathologically its abundance can lead to addiction while its absence is a hallmark of Parkinson's disease. Dopamine is released at high levels during the waking part of the cycle, or 'subjective day', and drops low during the sleep cycle, 'subjective night'. Moreover, proteins that respond to dopamine (termed dopamine 'receptors') are found in rod cells. Combining these two pieces of information allowed the researchers to hypothesize that circadian control of dopamine levels might serve to alter the electrical coupling between rods and cones.

The researchers used two methods to study this question. In the first, they loaded individual cone cells in goldfish or mouse retinas with a visible tracer dye. This dye is small enough to pass through open gap junctions and should therefore be able to spread from the initial cell to any connecting cells, as through a web. So they waited for a while to see how far this dye might travel away from the initial cell. During the subjective day, they observed that the dye didn't spread very far - 5 to 6 cells in addition to the original cell they loaded. Subjective night, however, was quite a different story. At night, the dye spread to THOUSANDS of other cells. Yes, you read that right. That's the kind of scale of effect we scientists dream about. This result demonstrated that gap junction electrical coupling was indeed weak during the day, but very strong at night. To figure out if dopamine was involved, the researchers performed the experiment again, but this time in the presence of chemicals that block dopamine receptors on rod cells. This time they found that the dye spread to thousands of cells at any time of day, indicating that dopamine was preventing or inhibiting these electrical interactions.

Next the researchers turned to my favorite technique, patch clamp, to figure out the consequence of this increased gap junction coupling at night. Patch clamp is a powerful technique that allows us to record the electrical currents used by cells in the nervous system to communicate information, through the use of tiny electrodes. The researchers isolated intact goldfish retinas and were able to record the live electrical signals from individual rods and cones within the entire network of retinal cells. They compared the electrical signals from rods and cones in both subjective day and night, and the differences were incredible. During the subjective day, the signals were what you would expect; cones looked like cones, rods looked like rods, and there was little obvious cross-talk between them. At night, however, things were quite a different story. Rods still looked like rods, but amazingly the cones also now looked like rods! They had the same monochrome light sensing, and the same slow response time. It seemed as though the rods had taken over the usual cone function and circuitry to amplify their own signals, while also suppressing those of the cones. The researchers also took the study one step further, to see what would happen if they darkened the retina during the subjective day, or shone a bright light on it during the subjective night. What they found was that bright light was able to reverse the rod take-over and make cones act like cones again. But when they tried the reverse - darkening the retina during the subjective day - there was no effect. The rods were only able to usurp the regular functioning of cones when the animal's body thought that it should be night.

What does the pithy scientist think?
Disclaimer: what follows is merely opinion, possibly speculation, and occasionally hearsay. But it's the best part, darn it!

This study is the scientific equivalent of a home run: clearly demonstrable, huge effects that are definitive and reverse a conventional viewpoint. It's amazing to think of a cell type that can take over its neighbors during the night-time when they are not needed, and relinquish them during the day. It's a bit like distributed computing (some good reading there). In all honesty, I don't even need the pros and cons points for this study because it's pretty darn airtight. The only complaint you could possibly make would be that the study wasn't conducted in humans, which would of course be impossible with current technology. But I'll spit out some pros just to make the authors feel even better about themselves:

Let's review the points that support the hypothesis that rods take over cones during the night:
  1. There is a vast increase in electrical connections between rods and cones at night.
  2. Mucking about with dopamine during the daytime can evoke the same increase.
  3. Cones behave exactly like rods during the night-time.
Although there aren't any obvious cons, at least to my mind, there certainly is room for more questions and further unique hypotheses - for example (and these ARE hypothetical):
  1. The brain never pays attention to the rods - and to get noticed, they have to take over the cones and use the cone circuitry to convey their messages to the brain.
  2. Rods suppress cones during the night-time because the cones become too noisy when confronted with low-light situations and the brain can't interpret them. This is like trying to listen to music if someone is jackhammering outside your window. There's so much ambient noise, that it's difficult to hear the 'signal', or music.
  3. Cones actually DO become rods during the night... in other words, they innately change their responses to light. Okay, this one's a stretch, but it's possible.
I would love to see someone look into those.

This study underlines a fundamental feature of biology that is often lost in the details - that the body is incredibly economical in its use of resources. The same central circuit being used for different input systems is an impressive way to leverage an existing energy and structural investment. It certainly is reminiscent of a multitasking computer system... or is it the other way round? It's also beautiful mathematics. Is it by necessity, or pure chance? We may never know. But we can certainly marvel at its elegant simplicity.

Thursday, September 11, 2008

Where is that little boson?

I would be remiss if I did not mention that the newest particle accelerator in the world came online yesterday with its first full circuit test. Meaning that those wacky physicists shot a beam of protons around the entire 27 km ring. It's a good thing there were no beer bottles stuck in it this time.

I have a very good friend who is a theoretical physicist and I asked him about the supposed danger in switching this thing on. You may be aware that there is a small number (really, just one) of physicists who are going crazy over the possibility that the LHC could create a small black hole that would swallow our entire planet. My friend admitted that he and his colleagues had not, at that time (that being 2 years ago or so), considered the possibility. However, upon substantial review, they determined that it would be impossible to create such a black hole, and now the physics community is convinced the LHC is perfectly safe - except for that one guy who can't let his theory go. We do encounter these people from time to time in science, and it's unfortunate, because the rule of science really is this: develop a hypothesis, and then attempt to DISprove it. If you just try to prove the hypothesis, you never consider any contrary information, and according to my friend, that's what this crazy physicist is doing. So if it's any consolation - I am supremely confident that we are all 100% safe. Good luck to my friend; I hope the data he obtains from this thing validates the years he's spent working on it.

Wednesday, September 10, 2008

Do rats have values?

Neural correlates, computation and behavioural impact of decision confidence.
http://www.ncbi.nlm.nih.gov/pubmed/18690210

I would like to start off this post with a disclaimer: in no way am I claiming that rat cognition is on the same level with humans. I merely suggest that as we learn more and more about the rodent brain, the distance that separates humans and other animals becomes smaller and smaller. Last year came the suggestion that rats can laugh. Today there is the suggestion that rats assign complex values to decision making. None of these studies in isolation proves these phenomena, but evidence is mounting that at the genetic, cellular, and organ levels the tangible difference between rodents and man is shrinking. This makes it all the more shocking that humans, quite obviously, operate at a much higher cognitive level than rats: the use of tools, the building of complex structures, the creation of language and music, and so on. At least that is our perception; in fact, the candidate list for what 'makes us human' seems to shrink with each passing year. My colleagues in science and I are not trying to suggest that humans and rats think in exactly the same way. This is not the case; we are clearly different. The question that continues to elude us is, what lies at the heart of the distinction? Things that once seemed obvious (laughter, decision-making not based on binary instinct) are no longer so.

Article Summary
The hypothesis: Researchers claim to have discovered neurons in the rat brain that assign a confidence value to a decision - in other words, how confident am I that this choice I am making is correct? This type of brain function was previously thought to be the exclusive purview of primates.

The Setup: Researchers became interested in the following question: do rats have any capacity to assess the quality of their choices, prior to making them? It's a question of confidence, and here's an example: if I tell you to close your eyes, feed you some cheese and then ask you to identify which is cheddar and which is Swiss, here's how your thought process might unfold. First, you'd smell the cheese, perhaps feel it, and then taste it. Then, based on your prior knowledge of cheeses, you'd hazard a guess as to which is which. If you always order the cheese plate for dessert at your favorite restaurant, you'd be pretty confident in your guess. But if you're trying to lower your cholesterol and have been laying off the cheese lately, you might feel a little uncertain. That's a poorly controlled study and wouldn't hold up in the lab, but you get my point. For millennia, scientists and philosophers have concluded that this thing called confidence is an exclusively human trait - or, more recently, at least an exclusively primate one. Today's researchers want to show that this is not the case, and that rats can do it too. Here's what they did.

The Experiment: The researchers actually chose a (better controlled) version of the story I just fed you. First they poked 3 holes in a box; through the center hole, they puffed a combination of two odors mixed in different ratios. That's critical. Then, depending on the ratio of odors, they put a tasty piece of cheese (just kidding - they actually gave the rats water, which they love) in one of the two holes. If the amount of odor A in the mixture was higher, the water was always delivered to the left-hand hole. For odor B, the opposite was true. Take it from me - rats don't work for free.

The researchers had implanted little electrodes in the rats' brains so they could record the activity of the neurons in a region of the brain called the orbitofrontal cortex (or OFC, for short) during the experiment. The OFC is situated in the front of the brain, an area traditionally associated with decision-making, confidence, morality, and other heady stuff in humans. So, they stuck the rats (one at a time, mind you) in the box and fired up their little scent-puffer. They puffed in the 2 odors at different ratios and let the rats make their decision about which hole to run to in order to get water. They also varied the time until delivery of the water, so that in some later versions of the experiment, the rats could choose to give up and start over.

Researchers looked at the activity of those OFC neurons during the course of the experiment. They saw two kinds of neurons: the first became more active when the mixture of odors was close to 50/50 and less active when the mixture was dominated (say, 80/20) by one odor. The activity of the second kind of neuron was exactly the opposite. The key to the activity patterns, however, was the timing. It had previously been reported that these two populations of neurons represent the response to obtaining or failing to obtain the reward - in other words, more activity would mean 'Yay, I got my drink!'. However, the researchers had good enough time resolution to observe that the neurons actually started to become more (or less) active nearly a second BEFORE the rat arrived at the hole to claim its thirst-quencher. This implies a greater or lesser sense of expectation, otherwise known as... confidence? They graphed the neuron activity (on the y axis) versus the odor mixture composition (from 100/0 to 0/100, on the x axis) and the graph had a distinctive curved V shape.

The mid-decision neuron activity pattern, when graphed, closely matched a mathematical model of confidence that simply combined the odor stimulus and distance of the stimulus from a 50/50 mixture. More complex models based on learning and memory did not match the neuronal activity. Finally, the researchers added a little twist to the experiment. The rats were allowed to give up early and restart the test if they (presumably) were not confident about their decision. In fact, the rats frequently gave up when the odor mixture was close to 50/50, but rarely when the odor mixture was dominated by one odor.
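To make the 'distance from 50/50' idea concrete, here's a minimal toy version in Python. To be clear, this is my own illustrative formulation, not the authors' actual model, and the 'activity' numbers are arbitrary:

```python
# Toy 'distance from the boundary' confidence model: confidence grows with how far
# the odor mixture sits from 50/50, and the two neuron populations mirror each other
# around it. This is an illustrative sketch, not the authors' published model.
def confidence(fraction_odor_a: float) -> float:
    """0 at a 50/50 mixture, 1 at a pure odor."""
    return abs(fraction_odor_a - 0.5) / 0.5

for frac in [0.5, 0.6, 0.8, 1.0]:
    c = confidence(frac)
    uncertain_cell = 1.0 - c   # fires most for ambiguous mixtures
    confident_cell = c         # fires most for easy, lopsided mixtures
    print(f"odor A fraction {frac:.1f}: confidence {c:.2f}, "
          f"'uncertain' cell {uncertain_cell:.2f}, 'confident' cell {confident_cell:.2f}")
```

The 'uncertain' cells peak at ambiguous mixtures and the 'confident' cells peak at easy ones, mirroring the two opposite activity patterns described above.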

What does the pithy scientist think?
Disclaimer: what follows is merely opinion, possibly speculation, and occasionally hearsay. But it's the best part, darn it!

Let's explore the points that argue in favor of the hypothesis that rats possess 'confidence neurons':
  1. Neurons in the OFC displayed different activities before a decision on which hole to run to for water was made. Some neurons became more active when the odor mixture was near 50/50; this activity may occur when the rat thinks it is making a wrong decision.
  2. The confidence model predicts the exact and rather complex neuron activity observed while rats are making their choices. Models based on learning and memory alone do not produce the observed pattern.
  3. Rats were much more likely to give up on a reward and restart the test if the odor mixture was near 50/50. This suggests that they are capable of uncertainty.
What about arguments against that hypothesis?
  1. This is an odor test. Rodents live for odors. As one adviser once told me, you could take out 90% of a rat's brain and they'd still ace any odor test. That makes subtle differences like those observed in this study a little suspect.
  2. The confidence model used in this study is based on very few factors. Is that all there is to confidence? It seems - dare I say it - overly simplistic. Also, when graphed on the same axes, it did not perfectly match the actual neuron activity data.
What further experiments would support this hypothesis?
  1. I'd really like to see these neurons monitored while testing different sensory modalities - sight, touch, taste, and sound. Smell alone makes me nervous.
  2. Other strains of rats, and mice should be tested for the same behavior. And maybe some more mammals while you're at it. Dolphins anyone?
  3. The model seems a little simplistic. It needs to be expanded. For example, I'd like to understand why some neurons fire more when the rat is not confident in its choice, while others fire less. Why the two populations? There is the caveat, however, that the brain is typically a sort of 'push-pull' system, with some neurons pushing the animal to act one way and different neurons pulling the animal in a different way. The actual behavior of the animal is dictated by the neurons that win that push-pull battle (typically the ones that are more active, in a reductionist model).
Nevertheless, this is an intriguing study. If true (and it will require more tests to prove), it actually provides us with two valuable insights. One, that primates are not alone in our possession of this magical 'confidence'. Two, it demonstrates how probability might be encoded as a pattern of neuron activity by our brains. As I stated in the introduction, this study suggests that another wall we have constructed between ourselves and the so-called animals that live around us could come crashing down. I won't enumerate all of the ways in which scientists and philosophers have in the past sought to separate us from other living creatures, but suffice it to say that the number is shrinking at a faster and faster clip. From an evolutionary perspective, though, it comes as little surprise - the slow rate of evolutionary change nearly dictates that for most higher order functions that we typically ascribe to humans, there must be at least some rudimentary correlate further back down the chain of evolution. Now, it is only a question of how far back we can look, and as this study suggests - we may have to look much further than we thought.

Friday, June 20, 2008

Preparations - mission statement

Scientific conversations for everyone.

New Hypothesis is a blog about science. No, not one of those scary, incomprehensible reviews of dense, impenetrable literature. And no, not one of those fluffy, shallow and frequently inaccurate newspaper "What's New in Science" stories, either. New Hypothesis strives to be something different, something better - to bridge the gap between primary scientific literature and the world at large. I will offer a fresh, engaging take on recent scientific developments, and try to frame them in the greater context of biology and society - but without sacrificing accuracy or depth. And if I'm lucky, these posts will turn into a lively forum for discussion of larger scientific and social issues with the occasional (drum-roll, please) new hypothesis.

I'm going to start blogging about topics from my field, neuroscience - that's the study of the brain. Studying the brain includes every step and level you could imagine, from how the individual proteins in your brain interact all the way up to how whole circuits produce perception, thought, and action. Plenty of grist for the mill in there. But this is a no-holds-barred kind of blog, so expect to see all sorts of interesting science turn up from time to time.

What makes me qualified to attempt such a thing? I am currently in pursuit of my PhD in neuroscience, and I have spent over 10 years working in the sciences, both in academia and industry. You have my outspoken dedication to quality over quantity. And my somewhat irreverent sense of humor. Hopefully that will do.

My goal is to provide one substantial review of recent research per week. This may be based upon one study in particular, or it may be a larger review of an exciting theme that I wish to explore. This review will consist of two halves, 1) a specific review and exploration of the pertinent research and 2) my own perspective, thoughts, commentary, ideas, and just way out there hypotheses as to how this research fits into the greater picture of human endeavor. I may also post the odd idea or comment throughout the week. But this blog won't work without your participation; please read, respond, and interact with other posters using the Comments feature.

I will be spending the next 2 months or so doing some background reading and research. I hope to launch the blog in mid-September. Please check back with me then. So long - JSK