Program

All times in CEST (Berlin time)

                     EDT (New York)   PDT (Los Angeles)   CEST (Berlin)   JST (Tokyo)

Day 1, July 10th:    8 am - 11 am     5 am - 8 am         2 pm - 5 pm     9 pm - 12 am

Day 2, July 11th:    9 am - 12 pm     6 am - 9 am         3 pm - 6 pm     10 pm - 1 am (next day)

Day 3, July 12th:    12 pm - 3 pm     9 am - 12 pm        6 pm - 9 pm     1 am - 4 am (next day)

Day 1  -  July 10th

2.00 - 2.15 pm (CEST)   

Introduction

Session 1:

2.15 - 2.55 pm (CEST)   

Ryota Kanai: The Synergy of AI Research and Neuroscience: Clarifying Concepts in Consciousness Studies 

In this presentation, we will explore the intersection of consciousness and intelligence, offering insights on how current theories of consciousness can be translated into deep learning architectures. We propose that this translation exercise enables the transformation of abstract ideas in consciousness into concrete, computational constructs, thereby bridging theoretical and applied realms (Juliani et al., 2022). Leveraging our analysis of the potential functions of consciousness (Kanai et al., 2019; Langdon et al., 2022), we make a case for consciousness as a foundation for general-purpose intelligence — defined here as the capacity to dynamically integrate existing functions. We then offer a novel interpretation of the Global Workspace Theory, wherein the global workspace is perceived as a shared latent space that binds together multimodal specialized modules (VanRullen & Kanai, 2021). By framing the shared latent space as an implementable mechanism for general-purpose intelligence, we open a new window into the understanding of consciousness. Moreover, we explore the potential implications of this shared latent space for the future of brain-to-brain communication technologies. 

Session 2:

2.55 - 3.35 pm (CEST)

Katharina Dobs: Using Artificial Neural Networks to Ask ‘Why’ Questions of Minds and Brains

Neuroscientists have long characterized the properties and functions of the nervous system, and they are increasingly succeeding in deciphering how the brain performs various tasks. However, the question of ‘why’ the brain works the way it does is less often considered, largely due to limitations in human testing. In this talk, I will argue that the new ability to optimize artificial neural networks (ANNs) for performance on human-like tasks now enables us to approach these theoretical ‘why’ questions about the brain. Specifically, when a particular behavioral or neural phenomenon spontaneously emerges in ANNs optimized for a task, it suggests this phenomenon may result from the brain’s optimization for that same task. I will highlight the recent success of this strategy in explaining why the human face perception system works the way it does, at both behavioral and neural levels.

(Break)

Session 3:

3.45 - 4.25 pm (CEST)

Blake Richards: The Other Side of the Looking Glass: Mirror Descent for Neuroscience Theory Development  

Most learning algorithms in machine learning rely on gradient descent to adjust model parameters, and a growing literature in computational neuroscience leverages these ideas to study synaptic plasticity in the brain. However, the vast majority of this work ignores a critical underlying assumption: the choice of distance function for synaptic changes (i.e. the geometry of synaptic plasticity). Gradient descent assumes that distance is Euclidean, but many other distance functions are possible, and there is no reason that biology necessarily uses Euclidean geometry. Here, using the theoretical tools provided by mirror descent, we show that, regardless of the loss being minimized, the distribution of synaptic weights will depend on the geometry of synaptic plasticity. We use these results to show that experimentally observed log-normal weight distributions found in several brain areas are not consistent with standard gradient descent (i.e. a Euclidean geometry), but rather with non-Euclidean distances. Interestingly, one of these non-Euclidean distance functions, negative entropy, naturally respects Dale's law, i.e. the physiological principle that leads neurons to be either exclusively excitatory or inhibitory. We show that this distance function has several nice properties that align with biological considerations. Overall, this work shows that the current paradigm in theoretical work on synaptic plasticity, which assumes Euclidean synaptic geometry, may be misguided, and that it should be possible to experimentally determine the true geometry of synaptic plasticity in the brain.
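The key contrast in this abstract can be illustrated with a toy sketch (not the speaker's implementation; function names, step size, and the random stand-in gradients are illustrative). Under a negative-entropy geometry, mirror descent becomes the exponentiated-gradient update, which is multiplicative rather than additive, so a weight can never change sign:

```python
import numpy as np

def gradient_descent_step(w, grad, lr=0.1):
    # Euclidean geometry: additive update; a weight can cross zero and flip sign
    return w - lr * grad

def mirror_descent_step(w, grad, lr=0.1):
    # Negative-entropy geometry (exponentiated gradient):
    # multiplicative update, so a positive weight stays positive
    return w * np.exp(-lr * grad)

rng = np.random.default_rng(0)
w = rng.lognormal(mean=-1.0, sigma=0.5, size=5)  # all positive ("excitatory")

for _ in range(100):
    grad = rng.normal(size=w.shape)  # stand-in for a loss gradient
    w = mirror_descent_step(w, grad)

# Sign is preserved under the multiplicative update, echoing Dale's law
assert np.all(w > 0)
```

Note that the same random gradients applied additively would eventually push some weights negative; the sign preservation here is a property of the geometry, not of the loss.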

4.25 - 5.00 pm (CEST)

Panel Discussion

Day 2  -  July 11th

3.00 - 3.05 pm (CEST)   

Introduction

Session 4:

3.05 - 3.45 pm (CEST)   

Mayank Agrawal: Integrating Theory-Based and Data-Driven Approaches to Human Decision-Making 

The rise of big data is transforming the state of scientific research. While the traditional approach in psychology has been to collect datasets on tens or hundreds of subjects in order to evaluate a single hypothesis, Internet crowdsourcing techniques now enable the collection of millions of data points and subsequent evaluation of thousands of hypotheses. This scale offers new and untapped potential, but it also necessitates ‘big theory’: how do we make sense of all this information? In this line of work, we demonstrate how to adapt the scientific method to large-scale datasets such that we can jointly maximize the predictive and explanatory power of our computational models. We apply these ideas to classic problems such as moral judgment and risky choice. 

Session 5:

3.45 - 4.25 pm (CEST)     

Anna Ivanova: Dissociating Language and Thought in Large Language Models 

Today’s large language models (LLMs) routinely generate coherent, grammatical, and seemingly meaningful paragraphs of text. This achievement has led to speculation that these models have become “thinking machines”, capable of performing tasks that require abstract knowledge and reasoning. In this talk, I will introduce a distinction between formal competence—knowledge of linguistic rules and patterns—and functional competence—understanding and using language in the world. This distinction is grounded in human neuroscience, which shows that formal and functional competence recruit different cognitive mechanisms. I will then show that the word-in-context prediction objective has allowed LLMs to essentially master formal linguistic competence; however, LLMs still lag behind in many aspects of functional linguistic competence, and their improvements often depend on additional supervised fine-tuning and/or coupling with an external reasoning module. I will conclude by discussing the value of the formal/functional competence framework for evaluating and building flexible, humanlike models of language use.

(Break)

Session 6:

4.35 - 5.15 pm (CEST)   

Caspar J. van Lissa: Machine Learning can Advance Theory Formation in the Social Sciences

Theories are the vehicle of cumulative knowledge acquisition. At this time, however, many social scientific theories are insufficiently precise to derive testable hypotheses. This limits the advancement of our principled understanding of development. This problem cannot be resolved by improving the way deductive (confirmatory) research is conducted (e.g., through preregistration and replication), because theory formation requires inductive (exploratory) research. In this presentation, I argue that machine learning can help advance theory formation in the social sciences, because it enables rigorous exploration of patterns in data. I will discuss specific advantages of machine learning, explain core methodological concepts, introduce relevant methods, and describe how data-driven insights are consolidated into theory. Machine learning automates exploration, and incorporates checks and balances to ensure generalizable results. It can assist in phenomenon detection and offers a more holistic understanding of the phenomena associated with an outcome or process of interest. 

5.15 - 6.00 pm (CEST)

Panel Discussion


Day 3  -  July 12th

6.00 - 6.05 pm (CEST)   

Introduction

Session 7:

6.05 - 6.45 pm (CEST)   

Filiz Garip: Future of Machine Learning in Sociology 

Sociologists are increasingly turning to machine learning (ML) for data-driven discovery and predictive modeling. ML methods help us classify data, compute new measures, predict outcomes and events, make causal inferences, and collaborate within a common-task framework. Although predictive analytics has become a mainstay of public policy analysis and evaluation, the contributions of ML to theory building are less widely appreciated. ML-derived data classifications can reveal patterns that require a new theory, while predictive performance metrics can point to shortcomings of existing theory and motivate inductive theorizing. Three examples from the Mexico-U.S. migration setting illustrate the utility of ML for theorizing by (i) selecting features best capturing climate-mobility linkages, (ii) discovering the diverse groups of migrants that emerge under different contexts, and (iii) revealing the differential predictability of migration across different sending regions and time periods.  

Session 8:

6.45 - 7.25 pm (CEST)       

Rebecca Johnson: Machine Learning and the Targeting of Help in U.S. Government Bureaucracies: Promises and Perils 

How can governments use machine learning predictions to guide the allocation of scarce resources? This paper discusses two research projects, each conducted in partnership with government agencies. The first project focuses on the scarce resource of outreach worker time. We partnered with an NYC agency that sends outreach workers to learn about tenant issues with their landlords and explored the use of machine learning to improve the efficiency of worker outreach. The second project focuses on the scarce resource of government survey incentives. We partnered with a federal agency to try to predict individuals' risk of nonresponse in a federal survey and explored the use of these predictions to improve the allocation of nonresponse-focused incentives. After discussing results from each project, I highlight the promises and perils of machine learning in this context, including how machine learning intersects with theories of fair resource allocation. 

(Break)

Session 9:

7.35 - 8.15 pm (CEST)     

Justin Grimmer: Who is On the Fringe? Machine Learning, Discovery, and Theory Development with Survey Data

Finding causes is a central goal in psychological research. In this talk, I discuss the problems and challenges in finding psychological causes from the perspective of philosophy of science, related to (1) the ill-defined nature of most psychological constructs, (2) the difficulties in manipulating psychological constructs and measuring them in a robust way, and (3) failures of causal sufficiency (also known as the problem of unmeasured common causes). I will then expand this discussion to the context of longitudinal data analysis, and consider how these problems manifest there and what additional problems arise. Although the conclusions will be rather pessimistic and critical, I will end by discussing several alternative approaches and ways forward.

8.15 - 9.00 pm (CEST)

Panel Discussion