My name is Mason Porter. I am a Professor in the Department of Mathematics at UCLA. Previously, I was Professor of Nonlinear and Complex Systems in the Mathematical Institute at the University of Oxford. I was also a Tutorial Fellow of Somerville College.
Sunday, September 12, 2010
LOLCAT Method
That's right. There is apparently a LOLCAT method, which is a particular implementation of the Gillespie Algorithm.
The stupid thing is that it's actually in a field related to mine (I also do exact stochastic simulation with a strong relation to the Gillespie method, though not on chemical reaction systems), but I can't figure out why they decided to call it LOLCAT. It's as if the name exists to bludgeon people rather than to serve as a convenient, memorable mnemonic. I just skimmed the paper but didn't see anything that suggested why it was used - their actual method isn't very interesting [to me] due to differences in system state space / reaction spaces.
I couldn't figure out why either. I glanced through the paper and gave up trying to figure out the origin of the name after a few minutes.
I didn't attempt to decipher the science in the paper; the purpose of my glancing was only to try to decipher the acronym.
Such bludgeoning is a really bad idea if there isn't something sensible behind it. Of course, they did only publish this in PLoS ONE rather than somewhere decent.
I did decipher enough of the science to figure out it wasn't useful for my cases: the general Gillespie method is usually used to stochastically simulate chemical reaction networks, which are interesting in that they usually have a fixed set of reactions and a fixed set of reactants [which leads to a huge state space once those are discretized, but the actual set of named reactants is usually small-ish].
My system has a massive number of reactions (given one perspective on what a reaction is - a particular combination of two reactants) and an exponential state space of possible reactants that we'd really rather not enumerate. So many ways of improving the Gillespie algorithm just don't apply to these systems - they typically involve speeding up the choice of which reaction fires next, lowering the RNG requirements, etc. In my system the critical path is not actually random number generation or the reaction decision [though I did do some improvement there that helped, it won't get asymptotically better than this], but rather the maintenance and generation of which reactions are possible when entering a given state [and their rates].
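(For context, the core of the standard Gillespie direct method looks roughly like the sketch below. The function names, species names, and the toy reaction are purely illustrative - they are not taken from the LOLCAT paper or from anyone's actual simulator - and the point is just to show which steps the usual speedups attack.)

    import random

    def gillespie_direct(state, reactions, t_end):
        # `state` maps species names to integer counts; each reaction is a
        # (propensity_fn, apply_fn) pair over that state.  Names are illustrative.
        t = 0.0
        while t < t_end:
            # The step most published speedups target: recompute propensities
            # and decide which reaction fires next.
            props = [prop(state) for prop, _ in reactions]
            total = sum(props)
            if total == 0.0:
                break  # nothing can fire any more
            t += random.expovariate(total)     # exponential waiting time
            pick = random.uniform(0.0, total)  # choose a reaction proportionally
            acc = 0.0
            for p, (_, apply_rxn) in zip(props, reactions):
                acc += p
                if pick < acc:
                    apply_rxn(state)
                    break
        return state

    # Toy mass-action example: A + B -> C at rate k (again, purely illustrative).
    k = 1.0
    reactions = [(lambda s: k * s["A"] * s["B"],
                  lambda s: s.update(A=s["A"] - 1, B=s["B"] - 1, C=s["C"] + 1))]
    print(gillespie_direct({"A": 100, "B": 100, "C": 0}, reactions, t_end=10.0))

In a fixed-network setting, improvements like the Gibson-Bruck next-reaction method mostly speed up the selection loop and the propensity updates; in my case the expensive part is constructing the set of possible reactions itself each time the state changes.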
So typically that type of paper isn't very useful for me, but I do read many like that just in case there is something that applies - e.g. several of the Bruck + Gibson papers were fundamental in building my knowledge of the Gillespie algorithms and the theory behind them [plus, Shuki's upstairs and is always very interesting to talk to!].
Papers by locals are key for exactly that reason. One just can't duplicate having some chats in person.
(Actually, this is why I have been recruited to give a presentation next week before I head off. We have a working group on spectral methods in networks, and one of the big things is to try to do something with community detection. I have written a survey article on this topic, so I'll be giving a presentation on strengths and weaknesses of various popular methods and also on big open problems on which the working group can perhaps make some progress.)
I wonder if we have anybody at Oxford who does something directly relevant to you? (We do have a large number of computational bio people.) Then it would be nice to try to figure out a way for you to visit that combines work with hanging out.
Yep - Andrew Turberfield is in my general field (DNA computing / molecular programming, which sounds similar but is distinct from computational biology). I had some good chats with his students at FNANO this year - I think I'm much more on the theory side, but one of the other grad students here collaborated with their group on a molecular motors project a bit ago.
-Joseph