Cognitive neuroscience and machine learning
Since I study dynamic patterns, real-time performance of experimental equipment is essential, especially when there is a closed-loop component (where long latency may change coordination patterns qualitatively). When the behavioral paradigm becomes even slightly novel, proprietary hardware becomes really cumbersome and in some cases cannot satisfy the experimenter’s needs at all (e.g. for the “Human Firefly” experiment mentioned above). After brief initial skepticism, I fell rapidly in love with open hardware, including Arduinos (microcontrollers) and open-source sensors. They gave me highly flexible control over real-time signal processing at an extremely low cost (compared to, say, something from National Instruments), along with accessible circuit schematics that I can modify to suit the needs of a particular experiment. Without open hardware, it would not have been possible to build a satisfactory apparatus (in a reasonable time frame) for the “Human Firefly” experiment. I have since used open hardware in various experiments (for the behavioral components; you really do need something proprietary for MRI) and helped colleagues incorporate it into their experiments.
At the beginning of my Ph.D. training, I found some signal processing procedures rather time-consuming (e.g. the continuous wavelet transform, CWT). I started looking into graphics processing units (GPUs) to accelerate these repetitive computations and wrote a CWT algorithm with parallel convolution using CUDA C/C++ (a toolkit developed by NVIDIA for computation on GPUs). After some moderate success, I began to think about a better application of CUDA - parameter exploration in parallel for a given nonlinear dynamical system. This idea was eventually realized when I was simulating eight-oscillator coordination problems (to understand empirical observations from the “Human Firefly” experiment above). The result surprised me - more than a terabyte of simulated data could be generated in under 2 hours (which would have taken months with MATLAB’s ode15s for my particular model), and it then took more than 6 hours to write that data to a hard drive (later I realized that there is no need to save the data at all if it can be simulated faster than it can be read back from the drive). This kind of performance can be achieved with the most basic CUDA runtime library, but much less so with higher-level libraries (e.g. Boost), due to unnecessary data transfers between CPU and GPU memory.
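The core idea - advancing many copies of the same dynamical system, one per parameter value, in lockstep - can be sketched in a few lines of NumPy. (The actual work used CUDA C/C++ and a different eight-oscillator model; the Kuramoto system, parameter values, and function name below are illustrative assumptions, not the original code.)

```python
import numpy as np

def kuramoto_sweep(K_values, n_osc=8, n_steps=2000, dt=0.01, seed=0):
    """Integrate many Kuramoto systems at once, one per coupling value K.

    Every parameter set advances in lockstep via array operations - the
    same data-parallel idea as a CUDA kernel with one thread per K.
    """
    rng = np.random.default_rng(seed)
    K = np.asarray(K_values, dtype=float)[:, None]      # shape (n_K, 1)
    omega = rng.normal(0.0, 0.5, size=n_osc)            # shared natural frequencies
    theta = rng.uniform(0, 2 * np.pi, size=(len(K_values), n_osc))
    for _ in range(n_steps):
        # diff[k, i, j] = theta_j - theta_i, vectorized over the parameter grid
        diff = theta[:, None, :] - theta[:, :, None]
        theta = theta + dt * (omega + (K / n_osc) * np.sin(diff).sum(axis=2))
    # Kuramoto order parameter r in [0, 1], one value per parameter set
    return np.abs(np.exp(1j * theta).mean(axis=1))

# zero coupling stays incoherent; strong coupling synchronizes
r = kuramoto_sweep([0.0, 2.0])
```

On a GPU the same structure maps naturally onto one thread (or block) per parameter set, which is what makes terabyte-scale sweeps feasible.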
Long before I learned anything formal about topology or coordination, my intuition was that complex adaptive systems learn (evolve, memorize, etc.) by varying the topology of their communication structure. Not until I had studied spontaneous coordination among eight people, both experimentally and theoretically, was I able to make this idea concrete. A key perspective gained from studying multiagent coordination is multi-dimensional metastability (i.e. relative coordination). If two oscillators are coordinated metastably, they “come together” and “split” intermittently. The spatiotemporal pattern is together –> not together –> together –> not together and so on. It is simple because there is not much “space” in two. When more oscillators are coordinated, “together” becomes “together with whom” - more complex spatial patterns are available; and when the coordination is metastable, we see switching between different spatial patterns. These switches are the defining points of the spatiotemporal structure - they are where the topology (topological type) of the structure (local in time) changes. Naturally, I needed to compute the local topology somehow. The catch is that in metastable coordination the components are never exactly together but only almost together. But how almost is enough? That is, the togetherness, along with the local topology, depends on the scale of description. I frequented Math department talks to hunt for tools and eventually found Persistent Homology - where “homology” is the topology part, and “persistent” is the multiscale part. I have done some initial explorations and found a few things quite interesting (including the A1 to A2 transition in the first figure). More work needs to be done, and it may lead to a deeper understanding of “the organization of behavior” in complex systems.
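The multiscale part is easiest to see in dimension 0: grow a ball around each point and record the scale at which connected components merge - each merge kills a component and produces a bar (birth = 0, death = merge scale). Here is a minimal self-contained sketch of this 0-dimensional persistence (the point cloud, function name, and union-find implementation are illustrative choices of mine; real analyses would use a library such as GUDHI or Ripser, and higher-dimensional homology as well):

```python
import numpy as np

def h0_barcode(points):
    """0-dimensional persistent homology of a point cloud.

    Components merge as the scale (ball radius) increases; each merge
    kills a component, giving one bar (0, death_scale). This is exactly
    single-linkage clustering, i.e. Kruskal's algorithm on the complete
    distance graph.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    edges = sorted((d[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))

    def find(x):                       # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    deaths = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(w)           # one component dies at this scale
    return deaths                      # n-1 finite bars; one bar lives forever

# two well-separated pairs: short bars die within each pair (scale 1),
# while one long bar persists until the pairs merge (scale 10) -
# "togetherness" depends on the scale at which you look
bars = h0_barcode([[0, 0], [0, 1], [10, 0], [10, 1]])
```

The long bar is what distinguishes structure from noise: features that persist across a wide range of scales are the ones worth interpreting.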
The kinds of spatiotemporal patterns mentioned above are described in terms of ODEs (speaking of which, I have studied this chaotic attractor in terms of its stable and unstable manifolds). But of course, when it comes to pattern formation it is impossible not to get some PDEs involved. My entry point was to understand Turing patterns undergoing a primitive kind of evolution, i.e. on a growing or a shrinking domain. How does the spatial frequency of the pattern change? Is shrinking just the reverse of growing? You can find the answers in these slides here. Turing patterns are technically static patterns (spatially non-uniform but constant in time), but I am also interested in dynamics. Therefore, I also investigated another type of pattern, one that is non-uniform in space and also oscillating in time. Take a look here for some dynamic patterns of the Brusselator system in 1D.
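For concreteness, a 1D Brusselator can be simulated with a minimal explicit finite-difference scheme on a periodic domain. This is a sketch under assumptions of mine, not the code behind the slides: the parameter values are illustrative, chosen so the homogeneous steady state is Turing-unstable (b above the Turing threshold (1 + a√(Du/Dv))² ≈ 2.66 but below the Hopf threshold 1 + a² = 5).

```python
import numpy as np

def brusselator_1d(a=2.0, b=3.0, Du=0.01, Dv=0.1, n=128, L=10.0,
                   dt=1e-3, n_steps=5000, seed=0):
    """Explicit Euler / central-difference integration of the 1D Brusselator:
        u_t = Du u_xx + a - (b+1) u + u^2 v
        v_t = Dv v_xx + b u - u^2 v
    with periodic boundaries, starting from the homogeneous steady state
    (u*, v*) = (a, b/a) plus small noise.
    """
    dx = L / n
    rng = np.random.default_rng(seed)
    u = a + 0.01 * rng.standard_normal(n)
    v = b / a + 0.01 * rng.standard_normal(n)
    lap = lambda f: (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2
    for _ in range(n_steps):
        u_new = u + dt * (Du * lap(u) + a - (b + 1) * u + u**2 * v)
        v_new = v + dt * (Dv * lap(v) + b * u - u**2 * v)
        u, v = u_new, v_new
    return u, v

u, v = brusselator_1d()
```

The time step satisfies the diffusive stability bound (Dv·dt/dx² ≈ 0.016 « 0.5); growing or shrinking the domain amounts to making L time-dependent and rescaling the Laplacian accordingly.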