TDS Meeting

Time: Fridays 12pm-1:30pm, Fall 2020 through Spring 2021 (first meeting 9/4)
Location: Over Zoom
Organizer: Nancy Lynch (lynch at csail dot mit dot edu)

(Tentative) Schedule

(9/4) Informal meeting of TDS group members: Nancy Lynch

  • Everyone presents what they are working on.
  • Explanation of the format of the TDS meeting.

(9/11) Spiking Neural Networks Through the Lens of Streaming Algorithms: Yael Hitron

    Abstract:

    We initiate the study of biological neural networks from the perspective of streaming algorithms. Like computers, human brains suffer from memory limitations which pose a significant obstacle when processing large scale and dynamically changing data. In computer science, these challenges are captured by the well-known streaming model, which can be traced back to Munro and Paterson ’78 and has had significant impact in theory and beyond. In the classical streaming setting, one must compute some function f of a stream of updates S={u1,...,um}, given restricted single-pass access to the stream. The primary complexity measure is the space used by the algorithm.

    In contrast to the large body of work on streaming algorithms, relatively little is known about the computational aspects of data processing in biological neural networks. In this work, we seek to connect these two models, leveraging techniques developed for streaming algorithms to better understand neural computation. In particular, we consider the spiking neural network model, a distributed model of biological networks in which nodes (neurons) are connected by edges (synapses), and communicate with their neighbors via spiking (i.e., firing). Our primary goal is to design networks for various computational tasks using as few auxiliary (non-input or output) neurons as possible. The number of auxiliary neurons can be thought of as the ‘space’ required by the network.

    Previous algorithmic work in spiking neural networks has many similarities with streaming algorithms. However, the connection between these two space-limited models has not been formally addressed. We take the first steps towards understanding this connection. On the upper bound side, we design neural algorithms based on known streaming algorithms for fundamental tasks, including distinct elements, approximate median, heavy hitters, and more. The number of neurons in our neural solutions almost matches the space bounds of the corresponding streaming algorithms. As a general algorithmic primitive, we show how to implement the important streaming technique of linear sketching efficiently in spiking neural networks. On the lower bound side, we give a generic reduction, showing that any space-efficient spiking neural network can be simulated by a space-efficient streaming algorithm. This reduction lets us translate streaming-space lower bounds into nearly-matching neural-space lower bounds, establishing a close connection between these two models.
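    As background for the streaming side, a minimal example of a linear sketch is the count-min sketch, a standard textbook construction for the heavy-hitters task mentioned in the abstract; this sketch is illustrative background only, not the paper's neural implementation:

```python
import random

class CountMinSketch:
    """Minimal count-min sketch: a linear sketch for frequency estimation.

    Every update adds to one counter per hash row, so the sketch of a
    concatenated stream is the entry-wise sum of the two streams' sketches --
    the linearity property that makes this a linear sketch.
    """

    def __init__(self, width=256, depth=4, seed=0):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.salts = [rng.randrange(1 << 30) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _cols(self, item):
        # One hashed column index per row, salted so rows are independent-ish.
        return [hash((salt, item)) % self.width for salt in self.salts]

    def update(self, item, count=1):
        for row, col in enumerate(self._cols(item)):
            self.table[row][col] += count

    def estimate(self, item):
        # Never underestimates; overestimates only from hash collisions.
        return min(self.table[row][col] for row, col in enumerate(self._cols(item)))
```

    The space used is width × depth counters, independent of the stream length, which is the kind of bound the abstract's neuron counts are compared against.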

  • Slide

(9/18) Striosomes Selectively Mediate Value-Based Learning Possibly Through Fast Spiking Inhibitory Interneurons: Sabrina Drammis, Jiajia

    Abstract:

    Learning valence-based responses to favorable and unfavorable options requires judgments of the relative value of the options and is necessary for species survival. We have found, using engineered mice, that circuit connectivity and function of the striosome compartment of the striatum are critical for this type of learning. Calcium imaging during valence-based learning revealed a selective correlation between learning and striosomal, but not matrix, signals. This striosomal activity encoded discrimination learning and correlated with task engagement. Striosomal function during discrimination learning was disrupted in aging, and severely so in a mouse model of Huntington's disease. Anatomical and functional connectivity of parvalbumin-positive, putative fast-spiking interneurons (FSIs) to striatal projection neurons was enhanced in striosomes compared to matrix in mice that learned. Based on our computational model, we suggest that FSIs can modulate the striosomal signal-to-noise ratio, which is crucial for discrimination and learning.

  • Slide

(10/2) A Comprehensive and Predictive Agent-based Model for Collective House-Hunting in Ant Colonies: Jiajia

    Abstract:

    The decentralized cognition of animal groups is both a challenging biological problem and a potential basis for bio-inspired design. The understanding of these systems and their application can benefit from modeling and analysis of the underlying algorithms. In this study, we define a modeling framework that can be used to formally represent all components of such algorithms. As an example application of the framework, we adapt to it the much-studied house-hunting algorithm used by emigrating colonies of Temnothorax ants to reach consensus on a new nest. We provide a Python simulator that encodes accurate individual behavior rules and produces simulated behaviors consistent with empirical observations, on both the individual and group levels. Our model successfully reproduces experimental results showing the high cognitive capacity of colonies, their rational time investment during decision-making, and their ability to avoid and repair splits with the help of social information. We also use the model to make predictions about several unstudied aspects of emigration behavior. The results suggest the value of individual sensitivity to site population for ensuring consensus, and they indicate a more complex relationship between individual behavior and the speed/accuracy trade-off than previously appreciated. The model proved relatively weak at resolving colony divisions among multiple sites, suggesting either limits to the ants' ability to reach consensus, or an aspect of their behavior not captured in our model. It is our hope that these insights and predictions can inspire further research from both the biology and computer science communities.
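    To make the agent-based framing concrete, here is a deliberately minimal toy emigration model in Python (matching the paper's choice of simulator language). The acceptance rule, quorum rule, and all parameters are illustrative assumptions, far simpler than the individual behavior rules (tandem runs, transports, re-assessment) encoded in the actual simulator:

```python
import random

def simulate_emigration(n_scouts=50, qualities=(0.3, 0.8), quorum=0.25,
                        steps=2000, seed=1):
    """Toy house-hunting sketch: each scout independently discovers candidate
    nests and accepts one with probability equal to its quality; once the
    population committed to some nest passes a quorum fraction of the colony,
    the colony commits to that nest. Returns the winning nest index, or None
    if no quorum is reached within the step budget.
    """
    rng = random.Random(seed)
    at_nest = [None] * n_scouts            # which candidate each scout accepted
    for _ in range(steps):
        i = rng.randrange(n_scouts)        # one scout acts per step
        if at_nest[i] is None:
            cand = rng.randrange(len(qualities))   # discover a random nest
            if rng.random() < qualities[cand]:     # accept w.p. its quality
                at_nest[i] = cand
        for c in range(len(qualities)):
            if at_nest.count(c) >= quorum * n_scouts:
                return c                   # quorum reached: colony commits
    return None
```

    Even this stripped-down version usually converges on the better nest, illustrating how quality-weighted individual acceptance plus a quorum rule yields colony-level consensus.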

  • Slide
  • Paper

(10/16) How to Color a French Flag: Biologically Inspired Algorithms for Scale-Invariant Patterning: Bertie Ancona

    Abstract:

    In the French flag problem, initially uncolored cells on a grid must differentiate to become blue, white or red. The goal is for the cells to color the grid as a French flag, i.e., a three-colored triband, in a distributed manner. To solve a generalized version of the problem in a distributed computational setting, we consider two models: a biologically-inspired version that relies on morphogens (diffusing proteins acting as chemical signals) and a more abstract version based on reliable message passing between cellular agents.

    Much of developmental biology research has focused on concentration-based approaches using morphogens, since morphogen gradients are thought to be an underlying mechanism in tissue patterning. We show that both our model types easily achieve a French ribbon - a French flag in the 1D case. However, extending the ribbon to the 2D flag in the concentration model is somewhat difficult unless each agent has additional positional information. Assuming that cells are identical, it is impossible to achieve a French flag or even a close approximation. In contrast, using a message-based approach in the 2D case only requires assuming that agents can be represented as constant-size state machines.
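    A minimal sketch of the concentration-based 1D case described above: a linear morphogen gradient from a source at the left end, with each cell differentiating purely from its local concentration via two fixed fractional thresholds. The gradient shape and thresholds are illustrative assumptions, not the paper's construction; the point is only that fractional thresholds make the tri-band scale-invariant in the ribbon length:

```python
def french_ribbon(n_cells):
    """Color n_cells cells as a 1D French ribbon from a linear morphogen
    gradient. Cell i sees concentration c = 1 - i/n; thresholds at 2/3 and
    1/3 of the source concentration split the ribbon into three equal bands.
    The comparisons are done in integers (c > 2/3 iff 3*i < n) to avoid
    floating-point edge cases at band boundaries.
    """
    colors = []
    for i in range(n_cells):
        if 3 * i < n_cells:          # c > 2/3 of source
            colors.append("blue")
        elif 3 * i < 2 * n_cells:    # 1/3 < c <= 2/3
            colors.append("white")
        else:                        # c <= 1/3
            colors.append("red")
    return colors
```

    Because each cell's decision depends only on the ratio of its local concentration to the source concentration, doubling the number of cells doubles each band without changing the rule — the scale-invariance the talk is about. The 2D difficulty discussed above is precisely that this local rule gives no information about the second coordinate.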

(11/13) Network Decomposition and Distributed Derandomization: Mohsen Ghaffari

(12/4) Locally Solvable Tasks and the Limitations of Valency Arguments: Hagit Attiya, Armando Castañeda, Sergio Rajsbaum

    Abstract:

    An elegant strategy for proving impossibility results in distributed computing was introduced in the celebrated FLP consensus impossibility proof. This strategy is local in nature: at each stage, one configuration of a hypothetical consensus protocol is considered, together with future valencies of possible extensions. This local nature makes the strategy very flexible, and it has been used in numerous situations related to consensus. Hence, one would like to use it for impossibility results for two other well-known tasks: set agreement and renaming. This paper provides an explanation of why impossibility proofs of these tasks have been of a global nature. It shows that a protocol can always solve such tasks locally, in the following sense. Given a configuration and all its future valencies, if a single successor configuration is selected, then the protocol can reveal all decisions in this branch of executions, satisfying the task specification. This result is shown for both set agreement and renaming, implying that there are no local impossibility proofs for these tasks.

  • Slide
  • Paper

(2/19) SNOW Revisited: Understanding When Ideal READ Transactions Are Possible: Kishori Konwar

    Abstract:

    READ transactions that read data distributed across servers dominate the workloads of real-world distributed storage systems. The SNOW Theorem stated that ideal READ transactions, which have optimal latency and the strongest guarantees (i.e., “SNOW” READ transactions), are impossible in one specific setting that requires three or more clients: at least two readers and one writer. However, it left many open questions. We close all of these open questions with new impossibility results and new algorithms.

    First, we rigorously prove the original result: it is impossible to have a READ transaction system that satisfies the SNOW properties with three or more clients. The insight gained from this proof led us to tease out the implicit assumptions required to state the results and to resolve the open question regarding the possibility of SNOW with two clients. We give an algorithm showing that SNOW is possible in a multi-writer, single-reader (MWSR) setting when clients can send messages to other clients; on the other hand, we prove that SNOW is impossible to implement in the MWSR setting, which is more general than the two-client setting, when client-to-client communication is disallowed. We also correct a previous claim that incorrectly identified one existing system, Eiger, as supporting the strongest guarantees (SW) with read-only transactions of bounded latency. Thus, no previous algorithm provided the strongest guarantees with bounded latency. Finally, we introduce the first two algorithms to provide the strongest guarantees with bounded latency.

(3/12) Algorithmic insights on continual learning from fruit flies: Saket Navlakha

    Abstract:

    Continual learning in computational systems is challenging due to catastrophic forgetting. We found a two-layer circuit in the fruit fly olfactory system that helps to address these challenges by uniquely combining two common ideas: sparse coding and associative learning. In the first layer, odors are encoded using sparse, high-dimensional representations, which reduces memory interference by activating non-overlapping populations of neurons for different odors. In the second layer, only the synapses between odor-activated neurons and the odor’s associated output neuron are modified during learning; the rest of the weights are frozen to prevent unrelated memories from being overwritten. We show empirically and analytically that this simple algorithm boosts continual learning performance compared to existing algorithms. We also find that the fly’s associative learning algorithm is similar to the classic perceptron learning algorithm, albeit with two modifications, which we prove are critical for reducing catastrophic forgetting.
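    A rough sketch of the two-layer idea (sparse high-dimensional expansion, then learning restricted to the active synapses). The projection density, code size, and additive learning rule here are illustrative assumptions, not the paper's exact circuit model or its perceptron-style rule:

```python
import numpy as np

def sparse_code(x, proj, k=32):
    """Layer 1: random expansion followed by top-k winner-take-all, yielding a
    sparse high-dimensional binary code; dissimilar inputs activate mostly
    non-overlapping units, which limits interference."""
    h = proj @ x
    code = np.zeros_like(h)
    code[np.argsort(h)[-k:]] = 1.0
    return code

def train_continual(X, y, n_classes, expansion=2000, k=32, lr=0.1, seed=0):
    """Layer 2: for each example, potentiate only the synapses from currently
    active code units to that example's class output; every other weight stays
    frozen, so earlier associations are not overwritten."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    proj = (rng.random((expansion, d)) < 0.1).astype(float)  # sparse random projection
    W = np.zeros((n_classes, expansion))
    for x, label in zip(X, y):
        code = sparse_code(x, proj, k)
        W[label] += lr * code          # update only the active synapses
    return proj, W

def predict(x, proj, W, k=32):
    return int(np.argmax(W @ sparse_code(x, proj, k)))
```

    Training the classes strictly one after another, as in a continual-learning stream, leaves earlier associations intact because their synapses are never touched by later examples with different codes.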

(3/26) Thalamocortical contribution to solving credit assignment in neural systems: Brabeeba Wang

    Abstract:

    Animal brains evolved to optimize behavior in dynamically changing environments, selecting actions that maximize future rewards. A large body of experimental work indicates that such optimization changes the wiring of neural circuits, appropriately mapping environmental inputs onto behavioral outputs. A major unsolved scientific question is how optimal wiring adjustments, which must target the connections responsible for rewards, can be accomplished when the relation of sensory inputs, actions taken, and environmental context to rewards is ambiguous. The computational problems of properly targeting the cues, contexts, and actions that lead to reward are known as structural, contextual, and temporal credit assignment, respectively. In this review, we survey prior approaches to these three types of problems and advance the notion that the brain's specialized neural architectures provide efficient solutions. Within this framework, the thalamus, with its cortical and basal ganglia interactions, serves as a systems-level solution to credit assignment. Specifically, we propose that thalamocortical interaction is the locus of meta-learning, where the thalamus provides cortical control functions that parametrize the cortical activity association space. By selecting among these control functions, the basal ganglia hierarchically guide thalamocortical plasticity across two timescales to enable meta-learning. The faster timescale establishes contextual associations to enable rapid behavioral flexibility, while the slower one enables generalization to new contexts. Incorporating different thalamic control functions under this framework clarifies how thalamocortical-basal ganglia interactions may simultaneously solve the three credit assignment problems.

  • Paper
  • Slides

(4/9) Distributional RL and dopamine circuitry: Brabeeba Wang

(4/16) Dopamine representation: Sabrina Drammis

(4/23) Computational overview on different aspects of basal ganglia: Brabeeba Wang

(4/30) Laplace code for distributional RL: Brabeeba Wang

    Abstract:

    In this talk, I introduce a recent paper on a different way to encode distributions in dopamine neurons. Instead of the original quantile/expectile code, the paper proposes that by varying the reward sensitivity and the temporal discount factor, one can encode the distribution through a flexible Laplace code. Furthermore, this Laplace code can be read out from the successor representation, which is a conjectured model for the hippocampus.
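    A toy version of the encoding step (ignoring the reward-sensitivity nonlinearities): a population of units, each with its own discount factor, reports the mean discounted reward over experienced outcomes; across the population this is a discrete Laplace transform of the reward-timing distribution, which an inversion step (omitted here) could in principle decode. Function and parameter names are illustrative:

```python
import numpy as np

def laplace_code(delays, rewards, gammas):
    """Population code over reward timing: unit j, with discount factor
    gamma_j, reports E[r * gamma_j**t] over sampled (delay t, reward r)
    outcomes. Viewed as a function of gamma, this is a discrete Laplace
    transform of the reward-timing distribution."""
    delays = np.asarray(delays, dtype=float)
    rewards = np.asarray(rewards, dtype=float)
    return np.array([np.mean(rewards * g ** delays) for g in gammas])
```

    Unlike a single expected value, the population response distinguishes timing distributions that share the same mean delay, which is what makes the code distributional.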

    Reading list:
  • Slides

(5/7) Why only us? An overview on language: Nancy Lynch

(5/14) SuperUROP presentation: Grace Cai, Emily Zhang, Keith Murray

    Abstract:

    This week, our three SuperUROP students give their presentations. Grace presents a spatially dependent ant house-hunting algorithm. Emily presents a convergence analysis of an ant house-hunting algorithm. Keith presents a task-optimized CNN that gives mechanistic insights into a direction-sensitive retinal circuit.

(5/21) Biologically motivated computational models of the brain: Lisa Yang
