Title: Tyramine functions independently of octopamine in the Caenorhabditis elegans nervous system
Summary: The year was 2005, and no one knew whether tyramine mattered in C. elegans. Octopamine had been found in C. elegans extracts, and exogenous octopamine could alter C. elegans behavior, but no one had yet identified a tyramine β-hydroxylase gene that could convert tyramine into octopamine in the animal. Most importantly, no one knew how octopamine and tyramine differed in their roles in worm behavior.
There are lots of optogenetic actuators, and they respond to diverse wavelengths. What if you could systematically profile how an organism, or a population of cells, responds to a precise temporal stimulation profile in three colors?
Well, now you can, because I built a board to do it. Here's an example of random stimulation of each well in a 96-well plate with blue and/or green light.
If this sounds like it might be useful to you - awesome, let me know! I'll be putting up the board schematics, bill of materials, and source code shortly so you can build your own.
Where do you get the files? Here! And if you get stuck on building it, hit me up and I'd be happy to help.
Summary: In this paper, they aim to make a fast, sensitive, and specific optically controllable protein interaction. They first note that a number of photoactive proteins have been engineered into photoswitchable actuators, including the LOV2 domain of phototropin 1, Vivid (VVD), and a bunch of others (cryptochrome 2, FKF1, UVR8, EL222). Point blank, I know nothing about any of these, so I really can't comment on how this work fits into the field (maybe I'll try to come back to this later).
In this work, they focus on Vivid (VVD), from Neurospora crassa. It's one of the smallest photoreceptor proteins, uses FAD as its chromophore (which is ubiquitous in eukaryotic cells), and homodimerizes when blue light is applied. However, it's got drawbacks.
It homodimerizes. Imagine you want to make gene A active when you apply light. You split gene A in half (A* and A') and make and express two fusion proteins: VVD-A* and VVD-A'. When you shine blue light, VVD-A* is as likely to pair with another VVD-A* (not its correct partner) as with VVD-A' (its correct partner).
It's slow. After you stop the blue light, it takes 3-4 hours before the dimers separate back into monomers.
When fitting fails, it fails for basically two reasons:
bad initial conditions
bad model (ill-conditioning/multicollinearity)
90% of the time it's the initial conditions. Finding good initial conditions is usually one of the hardest parts of fitting. You expect the fitter to do all the work, but it's really not that good - it just refines solutions once you're already in the vicinity of one.
As a RULE, always plot these three things overlaid:
Plot the data you want to fit, AFTER any transforms you might apply (for example, if you're going to fit the log-transformed data, PLOT the log-transformed data).
Plot the predictions based on the initial guesses for the parameters.
Plot the predictions based on the fit parameters.
How do we obtain good initial guesses?
One option is to guess some parameters (guided by what they mean in your model) and check whether they yield predictions anywhere in the vicinity of the data they're supposed to fit. This often works okay if you're only fitting one dataset. But what if you've got to fit 100 datasets? Then it's unlikely that a single initial estimate will work for all of them, and we're going to need to come up with initial estimates for each one.
A better option is to use heuristics that get you to ballpark estimates, correct to perhaps an order of magnitude. Some example heuristics:
If I were fitting a straight line, e.g. y = mx + b (note that this is just a toy example and you'd never use an iterative solver to fit this), I might get an initial guess for the slope parameter m by first sorting my data in order of ascending x, and then using (y_last - y_first)/(x_last - x_first).
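As a sketch (in Python; the function name and example data are my own), the endpoint heuristic for the slope might look like:

```python
# Slope heuristic for y = m*x + b: sort the data by ascending x,
# then take rise-over-run across the first and last points.
# (Illustrative sketch only; you'd never iteratively fit a line in practice.)

def initial_slope_intercept(xs, ys):
    pts = sorted(zip(xs, ys))             # sort by ascending x
    (x0, y0), (x1, y1) = pts[0], pts[-1]  # endpoints of the sorted data
    m0 = (y1 - y0) / (x1 - x0)            # rise over run across the data
    b0 = y0 - m0 * x0                     # intercept consistent with m0
    return m0, b0

# Toy data from y = 2x + 1, given out of order:
m0, b0 = initial_slope_intercept([3, 1, 2], [7, 3, 5])  # → m0 = 2.0, b0 = 1.0
```

The intercept guess falls out for free once you have a slope guess, since any single datapoint then pins down b.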
If I were fitting a logistic function, y = K / (1 + P * exp(-r*x)), I might choose K = max(y), because in my model, K is the maximum value y ever attains. For an initial guess for P, since I know that at x=0, y = K/(1+P), I might see if I have a datapoint around x=0. If I do, call it (x*, y*); then a decent initial guess for P might be P = K/y* - 1, or max(y)/y* - 1.
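The same two heuristics in code (a sketch; the function name and example data are made up):

```python
# Ballpark initial guesses for a logistic y = K / (1 + P * exp(-r*x)):
# K from the maximum observed y, P from the datapoint nearest x = 0.

def logistic_initial_guesses(xs, ys):
    K0 = max(ys)                                        # K: the maximum value y attains
    i = min(range(len(xs)), key=lambda j: abs(xs[j]))   # index of datapoint closest to x = 0
    y_star = ys[i]                                      # y* at that datapoint
    P0 = K0 / y_star - 1                                # from y(0) = K / (1 + P)
    return K0, P0

# Toy data that rise toward a plateau near 9.5, starting at y = 2 at x = 0:
K0, P0 = logistic_initial_guesses([0, 1, 2, 3], [2.0, 5.0, 8.0, 9.5])
# → K0 = 9.5, P0 = 3.75
```

These are only order-of-magnitude guesses - exactly what an iterative fitter needs to land in the vicinity of the solution.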
Fitting on a log or linear scale (and transforms more generally):
While I can fit models many ways, one of the most common issues I've had arise is whether to fit them on a log or linear scale. These do NOT give the same result - e.g. fitting y = mx+b versus fitting log(y) = log(mx+b). Why? Because when we fit, we're implicitly trying to minimize the overall difference between the left-hand side and the right-hand side of the equation. Strictly, most fitters default to minimizing the sum of squared errors between the left- and right-hand sides (observed data and predicted data, respectively). On a linear scale, the difference between 10 and 100 is a lot more than the difference between 1 and 20, whereas on a log scale, the latter is a much larger discrepancy.
Practically speaking, the way to choose whether to fit your data on a linear or log scale is to ask: do I care about absolute deviations, or relative deviations? Suppose I have some data (y) that span, say, four orders of magnitude, from 0.01 to 100.
This comes down to: do I believe the errors in my data are additive or multiplicative? If additive, fit the data on a linear scale. If multiplicative, fit it on a log scale.
Instead of calling a function that fits formulas, it often clarifies the problem to formulate it explicitly as an optimization (a maximization or minimization problem); this forces you to think clearly about what fitting actually means. Typical fitting functions (e.g. lm() or nls() in R) minimize the sum of squared errors between the data and the predictions.
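To make that explicit, here's a sketch (in Python rather than R; the data, learning rate, and iteration count are arbitrary choices of mine) that fits y = mx + b not via a fitting function but by directly minimizing the sum of squared errors with plain gradient descent:

```python
# Fitting written explicitly as an optimization:
# minimize SSE(m, b) = sum((y_i - (m*x_i + b))^2) over m and b.

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 2.9, 5.2, 6.8]   # noisy toy data, roughly y = 2x + 1

m, b = 0.0, 0.0             # initial guesses
lr = 0.02                   # step size (small enough to converge here)
for _ in range(20000):
    # partial derivatives of the SSE with respect to m and b
    gm = sum(-2 * x * (y - (m * x + b)) for x, y in zip(xs, ys))
    gb = sum(-2 * (y - (m * x + b)) for x, y in zip(xs, ys))
    m -= lr * gm
    b -= lr * gb
# m, b now match the ordinary least-squares solution for this data
```

Writing it this way makes the objective visible: swap the squared error for absolute error, or for squared error in log space, and you've changed what "best fit" means - which is exactly the linear-vs-log-scale choice discussed above.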
Title: Glia promote local synaptogenesis through unc-6 (netrin) signaling in C. elegans
Summary: The underlying question in this paper is: how do neurons know where to form synapses? Clearly this is a question I'm into, since this is the third paper I've looked at on it (the first was a post here, on a paper from the Kaplan lab, and the second was in yesterday's post, on a paper from the Schafer lab).
Summary: Okay, this one seems really straightforward (and nice and short). Their big question: can you engineer synapses between particular neuron pairs in C. elegans?
As a starting point, they decide to try to create novel electrical synapses (gap junctions), which have previously been created in other systems by overexpressing a single innexin (invertebrates) or connexin (vertebrates). They reason that expressing connexins (again, from vertebrates) in the worm is a better strategy than expressing innexins, because each family forms gap junctions only with members of its own family (a gap junction may involve several different connexins, or several different innexins, but not a combination of both). Ectopic expression of an innexin would therefore likely lead to all kinds of undesired gap junctions forming with the natively expressed innexins already present in neighboring neurons.
Title: Memory in Caenorhabditis elegans Is Mediated by NMDA-Type Ionotropic Glutamate Receptors
Summary: Okay, this is my second paper from Villu Maricq's group, and incidentally, from the same year. Must have been a good year.
In this paper, their question was: are glutamate receptors required for memory in C. elegans? AMPA and NMDA receptors have been implicated in memory in many systems, but not C. elegans. Specifically, in many systems neural activity modulates AMPA and NMDA receptor cycling and thus affects synapse strength.
Title: Hierarchical sparse coding in the sensory system of Caenorhabditis elegans
Year: 2015
Summary: The big question in this paper is how different neurons convey sensory information about various stimuli.
To start, they generated a library of 19 strains, in total expressing GCaMP3 in 28 different neurons or neuron groups. They then measured the calcium response from >5 animals of each strain to a panel of 13 stimuli. Those stimuli included isoamyl alcohol (on and off), diacetyl (on and off), salt (on and off), pH (low and high), osmotic stress (on and off), E. coli supernatant (on and off), and blue light (on only). I believe all of these have been previously tested on specific C. elegans neurons, but this sort of systematic all-vs-all study had never been undertaken. Clearly, this is direly needed information to parameterize models of pan-neuronal activity!