Anand Gopalakrishnan


Hi! I’m a PhD student with Prof. Jürgen Schmidhuber at The Swiss AI Lab (IDSIA). I’m broadly interested in the areas of unsupervised learning and deep learning. My long-term research goal is to design artificial agents that match the human-level capacity to build mental models composed of basic concepts such as objects, actions, and space, learned from perceptual inputs. Learning such an idealized mental model is, in essence, to “invert” the generative process, that is, to infer the underlying causal mechanisms given observations. These mental models would allow such agents to generalize beyond their direct experience by simulating plausible yet unobserved outcomes (i.e. relevant interventions and counterfactuals). They also allow for reasoning about the world in terms of cause and effect in a more human-like manner. Towards this end, my current research is focused on capturing meaningful discrete units (e.g. objects, skills, events) within neural networks from perceptual inputs.

Before this, I received my Master’s degree in Electrical Engineering from Pennsylvania State University. My master’s thesis was on generative modelling of human motion, which I pursued under the guidance of Prof. C. Lee Giles. During my undergraduate days at the National Institute of Technology - Karnataka (NITK), I was interested in signal processing, specifically in the domains of images and speech.


Sep 26, 2023 Recent work on synchrony-based models for visual binding accepted at NeurIPS 2023. Pre-print available.
Aug 25, 2023 Attended the MIT Center for Brains, Minds and Machines (CBMM) summer course 2023.
Oct 30, 2022 Recent work on unsupervised learning of temporal abstractions has been accepted for publication in Neural Computation. Check it out here.
Jun 15, 2022 Excited to start as a Research Intern at Amazon AWS AI Labs - Tübingen with the Causal Representation Learning team.
Jul 17, 2020 Work on unsupervised keypoint discovery received a spotlight presentation at the ICML 2020 workshop on Object-Oriented Learning. An extended version of this work was accepted at ICLR 2021 as a spotlight presentation.

selected publications

  1. Contrastive Training of Complex-Valued Autoencoders for Object Discovery
    Stanić*, Aleksandar, Gopalakrishnan*, Anand, Irie, Kazuki, and Schmidhuber, Jürgen
    In NeurIPS 2023
  2. Unsupervised Learning of Temporal Abstractions With Slot-Based Transformers
    Gopalakrishnan, Anand, Irie, Kazuki, Schmidhuber, Jürgen, and van Steenkiste, Sjoerd
    Neural Computation 2023
  3. Unsupervised Object Keypoint Learning using Local Spatial Predictability
    Gopalakrishnan, Anand, van Steenkiste, Sjoerd, and Schmidhuber, Jürgen
    In ICLR 2021