Cosyne 2026

Single consolidated note for the only Cosyne attended so far.

Conference leads and follow-ups

  1. Joshua Dudman — dopamine learning signal

  2. Francois Rivest (Canada) — timing using drift-diffusion models (DDM); see the accumulator sketch after this list

  3. Are exponential and accumulation models of timing the same?

  4. Uchida — TD value calculation circuit (a minimal TD(0) sketch follows this list)

  5. Michael Lepori — Brown University

    • Contravariance principle
    • Harder tasks → more tightly constrained, more unique solutions
  6. Cell learning / single-cell “working memory”

  7. Iain M. Banks

    • Culture series
    • Note: often referred to informally here as “Ian Banks”
  8. Halstead complexity (a toy metric calculation follows this list)

  9. Barlow vs Hebb (redundancy reduction vs correlation-based plasticity)

    • Find/reference the relevant paper
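
A minimal sketch of the timing-with-DDM idea in lead 2 (and the accumulation question in lead 3): a noisy accumulator drifts to a threshold, and the first-passage time is the produced interval. The function name and all parameters are illustrative, not Rivest's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def ddm_first_passage(drift, noise, threshold, dt=0.001, max_t=10.0):
    """Run one noisy accumulator to threshold; return the first-passage time."""
    x, t = 0.0, 0.0
    while x < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

# Time a 2 s interval by choosing drift so that threshold / drift = 2 s.
threshold, target = 1.0, 2.0
times = [ddm_first_passage(drift=threshold / target, noise=0.1,
                           threshold=threshold) for _ in range(500)]
print(f"mean={np.mean(times):.2f} s, cv={np.std(times) / np.mean(times):.2f}")
```

Timing a different interval amounts to rescaling the drift (or the threshold); the noise term makes the produced intervals variable around the target.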
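
For lead 4 (TD value computation): a minimal TD(0) value-learning loop on a toy chain, where the TD error delta plays the role of the dopamine-like prediction-error signal. A textbook sketch, not the circuit model from the discussion.

```python
import numpy as np

# TD(0) on a 5-state chain: reward arrives only on entering the final state.
n_states, alpha, gamma = 5, 0.1, 0.95
V = np.zeros(n_states)

for _ in range(2000):
    s = 0
    while s < n_states - 1:
        s_next = s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        delta = r + gamma * V[s_next] - V[s]   # TD error: the prediction-error signal
        V[s] += alpha * delta
        s = s_next

print(np.round(V, 3))  # values fall off backward from the reward by factors of gamma
```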
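
For lead 8: Halstead complexity is computed from counts of distinct and total operators and operands. A toy calculation with hand-supplied tokens; a real tool would tokenize actual source code.

```python
import math

# Tokens classified by hand for a tiny made-up snippet.
operators = ["=", "+", "*", "=", "+"]           # N1 = 5 total operator uses
operands  = ["x", "a", "b", "y", "x", "1"]      # N2 = 6 total operand uses

n1, n2 = len(set(operators)), len(set(operands))
N1, N2 = len(operators), len(operands)

vocabulary = n1 + n2
length     = N1 + N2
volume     = length * math.log2(vocabulary)     # program "size" in bits
difficulty = (n1 / 2) * (N2 / n2)               # error-proneness proxy
effort     = difficulty * volume

print(f"n={vocabulary} N={length} V={volume:.1f} D={difficulty:.2f} E={effort:.1f}")
```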

COSYNE 2024 talks

  • Playlist: https://www.youtube.com/playlist?list=PL9YzmV9joj3EjkmmUEodJNDq9ekI7iFjq
  • Session 3
    • What is intelligence — life and prediction are safe. How order emerges out of chaos.
    • Estrogen regulates dopamine and enhances learning by suppressing dopamine re-uptake.
    • Ching Fang, Abbott’s lab — adding an auxiliary loss improves learning; multi-region modelling using deep RL (see the sketch after this list).
    • Christopher Zimmerman — learning from events hours in the past; shows how the responsible brain region is identified.
    • Neural coding.
    • Srjan Osdac — geometry of responses in IC and A1; how manifolds (PCs 1-3) change over time (100 ms blocks).
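
A generic sketch of the auxiliary-loss idea from the Ching Fang talk: a shared encoder trained on a main objective plus an auxiliary prediction objective, so the encoder gets extra gradient signal. The architecture, loss weighting, and dimensions below are placeholders, not the model from the talk.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, hidden, n_actions = 10, 64, 4
encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
policy_head = nn.Linear(hidden, n_actions)   # main task: action logits
aux_head = nn.Linear(hidden, obs_dim)        # auxiliary: predict next observation

params = [*encoder.parameters(), *policy_head.parameters(), *aux_head.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

# One training step on a fake batch; real data would come from the task.
obs = torch.randn(32, obs_dim)
next_obs = torch.randn(32, obs_dim)
actions = torch.randint(0, n_actions, (32,))

z = encoder(obs)
main_loss = F.cross_entropy(policy_head(z), actions)
aux_loss = F.mse_loss(aux_head(z), next_obs)   # extra shaping signal for the encoder
loss = main_loss + 0.5 * aux_loss              # 0.5 is an arbitrary weight
opt.zero_grad(); loss.backward(); opt.step()
```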

Interpretability

  • Belief dynamics
  • Rabbit hull paper
    • Task geometry paper
  • Marginal value theorem
    • For foraging: leave a patch when its instantaneous reward rate falls to the environment's average reward rate (the condition is written out after this list)
  • Mechanical problem solving in mice
    • Task where mice must press levers or slide objects to get reward
    • Potentially a good paper for compositionality
  • Dragon king theory (Sornette): extreme outlier events beyond what a power-law tail predicts, generated by a distinct mechanism
  • BBP phase transition
    • The BBP phase transition (named after Jinho Baik, Gérard Ben Arous, and Sandrine Péché) describes a phenomenon in Random Matrix Theory where the largest eigenvalue of a “spiked” random matrix suddenly detaches from the main bulk of eigenvalues once the strength of a signal exceeds a critical threshold.
    • Usage note: in neural network training, analyze the Hessian (curvature) of the loss landscape at initialization to see whether a gradient-based method can “find” the signal; a small simulation follows this list.
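
For the marginal value theorem note above: with cumulative gain g(t) from a patch and average travel time τ between patches, the optimal residence time t* satisfies the standard patch-leaving condition, stated here for reference:

```latex
% Leave the patch when the marginal gain equals the environment's
% long-run average reward rate.
g'(t^*) = \frac{g(t^*)}{\tau + t^*}
```

Longer travel times push t* up: patches are worth exploiting longer when the environment is sparse.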
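
A small simulation of the BBP transition described above: a rank-one spike βvvᵀ is added to a Wigner matrix normalized so the bulk spectrum fills [-2, 2]; above β = 1 the top eigenvalue should detach from the bulk and land near β + 1/β. The matrix size and spike strengths are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
v = rng.standard_normal(n)
v /= np.linalg.norm(v)                      # unit spike direction

# Wigner matrix scaled so the bulk spectrum fills [-2, 2].
W = rng.standard_normal((n, n))
W = (W + W.T) / np.sqrt(2 * n)

for beta in (0.5, 1.5, 3.0):                # spike strengths around the threshold
    M = W + beta * np.outer(v, v)
    top = np.linalg.eigvalsh(M)[-1]         # largest eigenvalue
    bbp = beta + 1 / beta if beta > 1 else 2.0
    print(f"beta={beta}: top eigenvalue {top:.3f} (BBP predicts ~{bbp:.3f})")
```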