People

Members

Josh Tenenbaum

Group Leader

Josh Tenenbaum

I study the computational basis of human learning and inference. Through a combination of mathematical modeling, computer simulation, and behavioral experiments, I try to uncover the logic behind our everyday inductive leaps: constructing perceptual representations, separating “style” and “content” in perception, learning concepts and words, judging similarity or representativeness, inferring causal connections, noticing coincidences, predicting the future. I approach these topics with a range of empirical methods (primarily behavioral testing of adults, children, and machines) and formal tools (drawn chiefly from Bayesian statistics and probability theory, but also from geometry, graph theory, and linear algebra). My work is driven by the complementary goals of trying to achieve a better understanding of human learning in computational terms and trying to build computational systems that come closer to the capacities of human learners.

Ilker Yildirim

Research Scientist

Ilker Yildirim

My research aims to reverse-engineer how causal models of the world are implemented in the mind and brain, and spans the fields of vision, multi-sensory perception, planning, and social cognition. I use a distinctive combination of tools and methods such as probabilistic generative models, video game engines, deep neural networks, quantitative psychophysics, neuroscience data from non-human primates, and increasingly also human neuroimaging.

Tomer Ullman

Postdoc

Tomer Ullman

Both scientific and intuitive psychological knowledge can be viewed as theories of domains. How are these abstract knowledge structures acquired, and how do they change as new information is discovered? I am interested in these basic questions, working on modeling both intuitive knowledge such as social interaction and scientific theories such as physics, searching for a shared cognitive architecture.

Kevin Smith

Postdoc

Kevin Smith

I am interested in how people use physical reasoning for a variety of common-sense tasks such as prediction, inferences about object properties, or action planning. To support these capacities, we all have the ability to simulate how our environment will unfold based on the physics of the world. By using a combination of psychophysics and computational modeling, I study how we are able to perform this simulation, how explicit knowledge and rules impact our simulations, and how we can draw on these predictions to make inferences about the world and plan our actions.

Sydney Levine

Postdoc

Sydney Levine

I study the cognitive mechanisms underlying the human moral conscience. In particular, I am attempting to characterize the abstract rules used by preschoolers when they make moral judgments, many of which rely on subtle differences in the mental states of agents. Describing these rules can help us understand the representations and computations that power the moral mind.

Marta Kryven

Postdoc

Marta Kryven

I study social perception and decision-making in humans using computational models and behavioral experiments. People are very good at making judgments based on incomplete and uncertain evidence, while efficiently combining expectations and sensory stimuli. I am interested in the probabilistic computation that makes social inference and long-term planning possible. How do we make choices that pay off only far in the future? How do we make sense of the actions of people whose goals and values are very different from our own?

Jun-Yan Zhu

Postdoc

Jun-Yan Zhu

My research goal is to build machines capable of understanding and recreating our visual world. My current interests include deep generative models and their applications in computer vision and computer graphics.

Max Kleiman-Weiner

Postdoc

Max Kleiman-Weiner

What unique features of cognition give rise to the sophistication and scale of human social behavior? To this end, I study social decision making and strategic reasoning in humans and machines using computational tools such as Bayesian statistics, reinforcement learning and game theory. My research focuses on rational accounts of prosocial behavior and the nature of normative concepts such as morality and fairness.

Max Siegel

Postdoc

Max Siegel

I study how intuitive, potentially abstract theories can support perception. More generally, I am interested in understanding how knowledge expressed in different forms (e.g. propositionally or as a set of examples) can be synthesized and applied. To investigate these questions, I build computational models and compare the results with human psychophysical experiments.

Chuang Gan

Postdoc

Chuang Gan

I study multimodal learning from videos. My research goal is to build a video understanding system that can recognize, interact with, and understand the physical world through the multimodal integration of static imagery, motion, sound, and language.

Jon Malmaud

Graduate Student

Jon Malmaud

Pedro Tsividis

Graduate Student

Pedro Tsividis

I am interested in characterizing human curiosity and exploration with respect to their role in the construction of intuitive theories. I am especially interested in representational growth and in the way that curiosity drives the acquisition of more powerful representations, both in theory learning and problem solving. My research involves computational modeling as well as developmental and adult behavioral work.

Jiajun Wu

Graduate Student

Jiajun Wu

My interests lie at the intersection of computer vision, machine learning, and computational cognitive science. I am particularly interested in understanding how automatic vision systems can gain common-sense knowledge, and how this relates to human perception.

Kelsey Allen

Graduate Student

Kelsey Allen

How do we learn and interact meaningfully with the world? I’m interested in social learning as it applies to learning from watching others, learning about ourselves, and learning to cooperate and plan optimally in groups and pairs. I also study how the statistics of our natural world are internalized as priors for solving complex perceptual problems in the auditory domain.

Josh Rule

Graduate Student

Josh Rule (December 2016)

I am interested in understanding what knowledge of all forms is like — including facts, procedures, goals, and theory-like systems of concepts — and how it is acquired. I am particularly interested in how we develop and apply abstract concepts like colors, kinds, sets, lists, and numbers. My work centers around computational modeling informed by behavioral experiments, primarily with adults.

Kevin Ellis

Graduate Student

Kevin Ellis

I work on program induction: the problem of building AI systems that learn programs from data. This involves tools from machine learning, as well as techniques from the programming languages community, like program synthesis. More broadly I am interested in building more human-like machine learning systems.

Luke Hewitt

Graduate Student

Luke Hewitt

Andrés Campero

Graduate Student

Andrés Campero

I am interested in the interaction between symbolic probabilistic reasoning and sub-symbolic statistical learning, in the attempt to understand and replicate higher-level cognition in a cognitively meaningful way. To do so, I explore models that combine structured generative frameworks, such as probabilistic programs, with deep learning. I care about things like compositionality and the origin of concepts.

Maxwell Nye

Graduate Student

Maxwell Nye

I am interested in studying how humans learn programs and program-like representations from very few examples. I am currently modeling one-shot learning using a combination of probabilistic programming and deep learning.

Mario Belledonne

Research Assistant

Mario Belledonne

Amir A. Soltani

Research Assistant

Amir A. Soltani

I am interested in building perception modules that endow AI agents with the ability to understand the physical world through different sensory inputs (vision, audition, touch) and enable them to do efficient planning and manipulation. My long-term plan is to develop general-purpose methods that can learn new concepts and efficiently compose them to build models of an environment and solve new, more complex tasks.

Sholei Croom

Lab Manager

Sholei Croom

My current research interest lies in visual perception and in particular, how visual information is used to complete simple actions such as reaching and grasping. At the Tenenbaum lab, I am hoping to merge this interest with current research developing computational models of intuitive physics. Specifically, I am curious about tool use and how we are able to extract relevant features from objects to complete various tasks.

Roksi Freeman

Lab Administrator

Roksi Freeman

Affiliates

Vikash Mansinghka

Probabilistic Computing Project

Vikash Mansinghka

Vikash Mansinghka is a research scientist at MIT, where he leads the Probabilistic Computing Project. Vikash holds S.B. degrees in Mathematics and in Computer Science from MIT, as well as an M.Eng. in Computer Science and a PhD in Computation. He also held graduate fellowships from the National Science Foundation and MIT’s Lincoln Laboratory. His PhD dissertation on natively probabilistic computation won the MIT George M. Sprowls dissertation award in computer science, and his research on the Picture probabilistic programming language won an award at CVPR. He served on DARPA’s Information Science and Technology advisory board from 2010 to 2012, and currently serves on the editorial boards of the Journal of Machine Learning Research and the journal Statistics and Computing. He was an advisor to Google DeepMind and has co-founded two AI-related startups, one acquired and one currently operational.

Cameron Freer

Postdoc

Cameron Freer

I am an Instructor of Pure Mathematics in the MIT Math Department working on mathematical logic, computability theory, and formalized mathematics. Some of my recent research has focused on computable probability theory, theory of stochastic computation, and their relations to the formal language Church.

Alumni

John McCoy

Assistant Professor, University of Pennsylvania

John McCoy

I am interested in the computational principles underlying judgment and decision making. I care not just about the abstract principles, but also how they are approximately implemented by people as they act in the world. Much of my current work deals with how to aggregate information from multiple individuals, including in situations where the majority may be wrong. I also think about how cognitive science might inform marketing and policy-making more broadly.

Tobias Gerstenberg

Assistant Professor, Stanford University

Tobi Gerstenberg

I am interested in causality, counterfactuals, and responsibility. In my work, I investigate the ways in which these different concepts are linked. For example, when judging whether one event caused another event to happen, people often compare what actually happened with what they think would have happened in the absence of the causal event. People make use of their intuitive understanding of a given domain, such as physics or psychology, to simulate what would have happened in the relevant counterfactual world. I show that this general counterfactual account of causal attribution can also capture people’s responsibility attributions to individuals in groups for collectively brought about outcomes.

Eliza Kosoy

PhD Student, UC Berkeley

Eliza Kosoy

I am interested in what we can learn from the process by which kids learn and how we can apply it to machine learning.

Michael Janner

PhD Student, UC Berkeley

Michael Janner

Chris Baker

Co-founder and Chief Scientist at iSee AI

Chris Baker

I’m broadly interested in social cognition and more specifically in theory of mind. Social cognition describes how people reason about interpersonal situations and interact with other people; theory of mind describes people’s reasoning about the mental states of others, such as beliefs, desires and emotions. My research uses insights from psychology and philosophy to guide the development of computational models of people’s intuitive theories of other people. These models are implemented with technology from artificial intelligence and machine learning, and tested empirically using behavioral experiments.

Eyal Dechter

Engineer, Calico

Eyal Dechter

I am interested in a variety of topics at the intersection of cognitive science, machine learning, and artificial intelligence. I want to better understand how theories and concepts about the world are acquired and represented, how we can characterize the minimal requirements for acquiring them, and how they are used for reasoning.

Michael Chang

PhD Student, UC Berkeley

Michael Chang

I am interested in the inductive biases and algorithmic constraints that guide learning agents to develop their own languages for representing problems and modeling their world.

Peter Krafft

Postdoc, University of Washington

Peter Krafft

I am interested in understanding collective behavior. That is, I think about how group behavior is determined by the qualities and interactions of individual group members. Agent-based modeling is currently the main class of formal models I use to study social systems. Agent-based models are nice because they relate underlying psychology to aggregate social behavior.

Timothy O’Donnell

Assistant Professor, McGill University

Timothy O'Donnell

In my research, I develop mathematical models of the way children learn language and the way adults generalize linguistic rules to create new words and sentences. My research draws on experimental methods from psychology, formal modeling techniques from natural language processing, theoretical tools from linguistics, and problems from all three.

Yibiao Zhao

Co-founder and CEO at iSee AI

Yibiao Zhao

Yibiao was a postdoc in the lab. He obtained his PhD from the Center for Vision, Cognition, Learning, and Autonomy lab at UCLA in 2015.

Julian Jara-Ettinger

Assistant Professor, Yale University

Julian Jara-Ettinger

I study the fundamental representations and computations that underlie our ability to navigate the social and physical world. To date, much of my work specifically looks at how we represent and reason about other people’s minds and on how we infer what they know, think, and want.

Tejas D Kulkarni

Google DeepMind

Tejas Kulkarni

I am broadly interested in machine learning, computational statistics, and computer vision. My current research is at the intersection of probabilistic inference, discriminative learning techniques, and computational vision. To this end, I have been trying to push the idea of framing problems in computer vision in the context of inverse graphics (a.k.a. graphics programming). Generative models provide the flexibility to model complex structure in the world, but inference is often intractable or slow. Along with general-purpose MCMC techniques, I am interested in exploring fast discriminative techniques to speed up inference in probabilistic models with complex structured output spaces. I have also been working extensively to combine ideas from variational methods and particle filtering to speed up inference in time-series models.

Jonathan Huggins

Graduate Student

Jonathan Huggins

My primary interests are in Bayesian machine learning and AI, with a focus on Bayesian nonparametrics and probabilistic programming. I am particularly interested in approximate Bayesian inference methods with provable guarantees. My research includes the analysis of existing inference algorithms as well as the development of novel ones. I believe that probabilistic programming has the potential to be a language in which to describe expressive classes of generative models which admit tractable algorithms with sufficiently strong approximation guarantees.

David Reshef

David Reshef

I am broadly interested in the areas of machine learning, statistical inference, and information theory. My work focuses on developing tools for identifying structure in high-dimensional datasets using techniques from these fields.

Roger Grosse

Assistant Professor, University of Toronto

Roger Grosse

Brenden Lake

Assistant Professor, New York University

Brenden Lake

I study cognition through behavioral experiments and computational models. Currently, I am researching how people learn perceptual categories, such as the names of animals or the speech sounds of their native language.

Peter Battaglia

Research Scientist, Google DeepMind

Peter Battaglia

I am broadly interested in biological and machine perception, perceptually-guided motor control, and machine learning. I am specifically focused on probabilistic models of human spatial perception, reaching behavior, causal event perception, and low-resolution scene recognition. My goal is to develop formal models of how people draw sophisticated perceptual interpretations and produce precise, robust actions.

Leon Bergen

Assistant Professor, UCSD

Leon Bergen

I study language using computational modeling and experimental methods. My research has focused on pragmatic reasoning and the acquisition of syntactic knowledge.

Joshua Hartshorne

Assistant Professor, Boston College

Josh Hartshorne

Language allows someone to take an idea in their mind, bundle it up, and transmit it through some medium, where it will be received by another person and unpacked into (something close to) the original idea. The linguistic signal is complex: some of the information is transmitted through the words, some through the syntactic structure, some through non-linguistic cues like gesture, some through less-linguistic cues like intonation, and some is not even encoded at all but the speaker has some reasonable expectation that the listener can infer it. My research is focused on understanding what information is transmitted via what mechanism, and how.

Sam Gershman

Assistant Professor, Harvard University

Sam Gershman

I am interested in the intersection of cognition, neuroscience, and artificial intelligence. My current work examines how the brain acquires structured representations of the world in a variety of domains, including motion perception, reinforcement learning, and semantic cognition.

Andreas Stuhlmüller

Postdoc, Stanford

Andreas Stuhlmüller

I want to understand how to represent, learn, and reason with structured knowledge. I work on probabilistic programming as a means of knowledge representation, and probabilistic inference as a method of machine learning and reasoning. I am broadly interested in topics in cognitive science and artificial intelligence that contribute to this project, including concept learning, theory of mind, game theory, and decision theory.

Jess Hamrick

Google DeepMind

Jess Hamrick

My interests lie at the intersection between cognitive science and artificial intelligence. Broadly, I want to understand how people integrate perception and reasoning to understand the world around them. I am particularly interested in computationally specifying the algorithms underlying reasoning, as well as the knowledge representations accessed by those algorithms, by drawing on methods from machine learning and statistics. I have most recently focused on how people reason about everyday physical events, modeling this “intuitive physics” through knowledge-rich simulations consistent with Newtonian physics.

David Wingate

Assistant Professor, Brigham Young University

David Wingate

My research interests lie at the intersection of perception, control and cognition, and how all three have synergistic effects on learning. Specific interests include reinforcement learning, unsupervised learning of useful knowledge representations (including predictive representations of state and structured nonparametric Bayesian distributions), information theory, manifold learning, kernel methods, massively parallel processing, visual perception, and optimal control.

Katherine Heller

Assistant Professor, Duke

Katherine Heller

I am interested in developing new statistical methods, using hierarchical and nonparametric Bayesian models, for extracting useful information from data when little or no supervision is available. I aim to use these methods to model human behavior, including categorization, and social interactions in online environments.

Ruslan Salakhutdinov

UPMC Professor, Carnegie Mellon University

Ruslan Salakhutdinov

My broad research interests involve developing learning and inference algorithms for probabilistic hierarchical models that contain many layers of nonlinear processing. Some of my recent work has concentrated on the theoretical analysis and learning of Deep Boltzmann Machines, with applications to information retrieval, visual object recognition, and nonlinear dimensionality reduction. My other interests include Bayesian inference, transfer learning, matrix factorization, and approximate inference and learning of large scale graphical models.

Dan Roy

Assistant Professor, University of Toronto

Dan Roy

My research interests lie at the intersection of computer science, statistics and probability theory; I study probabilistic programming languages to develop computational perspectives on fundamental ideas in probability theory and statistics. I am particularly interested in the use of recursion to define nonparametric distributions on data structures; representation theorems that connect computability and probabilistic structures; the complexity of inference; and the limitations of probabilistic computation and other questions in computable probability theory.

Steven Piantadosi

Assistant Professor, University of Rochester

Steven Piantadosi

I am interested in how people use probabilistic inference to acquire and process interestingly structured information. I am also interested in information theory, theory of computation, philosophy of mind, and symbolic dynamics.

Virginia Savova

Postdoc, Broad Institute

Virginia Savova

My research centers on language as a symbolic communication system that makes infinite use of finite means. I believe that the question of how such a system is represented and implemented in the brain is fundamental to cognitive science. In the past, I have employed different methods for studying this question — from structural descriptions of syntactic phenomena, to Bayesian models and reaction-time experiments.

Noah Goodman

Assistant Professor, Stanford

Noah Goodman

I approach the study of mind with a combination of formal (mathematical) analysis, philosophical orientation, and empirical grounding. My research focuses on concepts and causality: what is the nature of causal and conceptual knowledge? How do we acquire this knowledge, and how do we use it?

Michael Frank

Associate Professor, Stanford

Michael Frank

In order to communicate successfully, children acquiring a language have to learn to segment words from continuous speech, learn the meanings of those words, and figure out how to put them together to make coherent sentences. I’m interested in all three of these problems, and I study them using artificial language learning experiments with adults and infants as well as probabilistic computational models. I work jointly with Josh Tenenbaum and Ted Gibson.

Ed Vul

Assistant Professor, UCSD

Ed Vul

Without constraints and assumptions, it is impossible to figure out what sorts of stuff in the physical world caused our retinal input. I am primarily interested in the priors and structures our visual system uses to solve this problem given limited resources. To this end, I study adaptation, attention, and other visual processes with psychophysics, computational methods, and fMRI.

Frank Jäkel

Assistant Professor, University of Osnabrück

Frank Jäkel

Without the ability to form concepts and categorize accordingly, the world would appear to be a chaotic place. How do we learn new concepts? How are concepts represented in memory? How are concepts related to the world? I’m trying to address these questions by combining insights from machine learning and cognitive psychology.

Liz Bonawitz

Assistant Professor, Rutgers University Newark

Liz Bonawitz

I started as Josh Tenenbaum’s lab coordinator in July of 2002. Now I’m a graduate student at MIT working jointly with Professor Laura Schulz in the Early Childhood Cognition Lab and with Professor Josh Tenenbaum. I’m interested in the development of human causal reasoning from infancy to adulthood and in the computational models that may shed insight on that process.

Yarden Katz

Departmental Fellow in Systems Biology, Harvard Medical School

Yarden Katz

I’d like to explore how ideas from Bayesian statistics and machine learning could help us interpret and systematically analyze neural and biological data.

Lauren Schmidt

Product Manager, Google

Lauren Schmidt

I’m interested in how people learn the meanings of words and infer relationships between words or between concepts. I’m also interested in how learning language can influence conceptual structure and development. One of my recent projects looks at how people can tell what is sensible but extremely rare or not occurring in nature (like a blue banana) from what is nonsense (like an hour-long banana), based on the limited evidence of what actually occurs in the world.

Charles Kemp

Associate Professor, Carnegie Mellon University

Charles Kemp

At some level, semantic representations are mathematical objects. I’m interested in finding a small set of structures and operations that can be composed to build these objects. I also enjoy thinking about formal models of social systems.

Amy Perfors

Associate Professor, University of Melbourne

Amy Perfors

I’m interested in applying Bayesian models to aspects of cognitive development, in particular to issues of learnability. What biases must children have in order to acquire knowledge in different domains (syntax, word and feature learning, understanding of kinds)? To what extent are these biases domain-general?

Kobi Gal

Assistant Professor, Ben-Gurion University of the Negev

Kobi Gal

How can computers learn to make good decisions in groups comprising both people and other computer agents? I study this question by developing representations and algorithms for learning the social factors that affect people’s decision-making in a variety of domains, such as negotiation, intelligent tutors and game playing.

Brian Milch

Google

Brian Milch

I’m interested in understanding how anything made of non-intelligent parts could behave as intelligently as a human being. More specifically, I develop models and inference algorithms that combine the ability of probability theory to quantify uncertainty, with the ability of first-order logic to efficiently describe large sets of related objects. I’m a post-doc in Leslie Kaelbling’s group at CSAIL, but I also collaborate with Josh Tenenbaum’s group.

Patrick Shafto

Associate Professor, Rutgers University – Newark

Patrick Shafto

I am interested in the kinds of things that make people seem clever — especially the ability to make robust, flexible, & reliable inferences from limited data. Particular areas of recent interest are learning multiple ways of organizing knowledge in a domain, flexible use of background knowledge to support inferences in different contexts, and learning in pedagogical & communicative settings. I try to understand these abilities through formal mathematical analyses and behavioral experiments. The goal of my research is to try to develop an understanding of people’s “real-world” reasoning.

Tevye Rachelson Krynski

Engineering Manager, Leanplum

Tevye Rachelson Krynski

I am interested in using cognitive models of human belief and causal reasoning to understand psychological phenomena such as base-rate neglect. The ultimate goal of my research is not just a computational model of human cognition, but also a method for developing AI systems that think like people. I received my PhD in BCS in 2006.

Konrad Koerding

Professor, Northwestern

Konrad Koerding

I did computational and cognitive neuroscience in the group of Josh Tenenbaum, BCS, MIT (previously with Daniel Wolpert and Peter König). I specialize in modelling and movement psychophysics. I expect my theories to make experimental predictions, provide a compact description of data, and lead to computationally strong algorithms. My experiments should falsify theories.

Tom Griffiths

Professor, UC Berkeley

Tom Griffiths

My research interests are developing computational models of higher level cognition. In particular, I’m interested in developing rational accounts of cognition using probabilistic generative models and Bayesian statistics. My current areas of interest are understanding people’s everyday inductive leaps – difficult inductive problems we solve every day, like predicting the future, learning causal relationships, and noticing coincidences – and the interface between psychology and machine learning in developing statistical models of language.

Mark Steyvers

Professor, UC Irvine

Mark Steyvers

My research interests span a diverse set of topics in cognitive science such as episodic and semantic memory, dynamic decision making, and causal reasoning. In each of these areas, I combine mathematical and computational modeling with behavioral experiments. The models and experiments are tightly coupled: I try to formulate empirical questions with the goals of constraining, developing, or testing between alternative computational models of how people learn, process, and represent information. My research interests also include some computer science topics in the domain of statistical machine learning and information retrieval. The adoption of recent machine learning methodology is useful in advancing cognitive science research, especially in the area of semantic memory.

Sean Stromsten

BEA Systems

Sean Stromsten

I’m interested in some of the conceptual groundwork that might someday make psychology more coherent — how we (or anything) can weave a web of concepts to explain experience. More technically, I’m interested in how to extend the range of probabilistic models of induction to richer descriptions of experience than those available to traditional probabilistic models. I’m also interested in the form of the associations of words with conceptual formulae, how we learn those associations, and how we use them to invoke thoughts in each other and ourselves.

Neville Sanjana

Sanjana Lab

Neville Sanjana

My research in the Tenenbaum lab is focused on exploring computational models of how humans learn and generalize through inductive inference. My undergraduate honors thesis analyzed how a computer might construct a hypothesis space (i.e. candidate guesses about the concept to be learned) that could match human generalization performance. In that work, I examined several different unsupervised learning techniques to build a hypothesis space for a Bayesian concept learner. Recently, I have been looking at how humans seem to be using, on an abstract level, taxonomic, tree-based models of similarity to guide their generalization. In my other lab, I am working on understanding the dynamics underlying neural computations in small networks of neurons.

Rebecca Saxe

Professor, MIT

Rebecca Saxe

I study the neural and psychological basis of social cognition. Do we have dedicated mechanisms for recognising and/or reasoning about other minds? How and why does the human brain succeed so easily where computers and logicians fail? Addressing these questions, my work spans the disciplines of cognitive neuroscience, developmental psychology, social psychology, computational modelling and philosophy.

Ronnie Bryan

PhD student, Caltech

Ronnie Bryan

I graduated from the Brain and Cognitive Science Department at MIT doing a UROP with Professor Tenenbaum. My personal research interests for the future include modeling human social cognition.

Anne Chin

Anne Chin

I graduated from the Brain and Cognitive Science Department at MIT doing a UROP with Professor Tenenbaum. I am primarily interested in concept learning and conceptual change during learning and development.

Carrie Niziolek

PhD student

Carrie Niziolek

I graduated from Brain and Cog Sci, only to stick around MIT in the Speech and Hearing Bioscience and Technology program. (I’m a cognitive scientist at heart, of course.) I’m interested in language — both its evolution as a communication system and its purpose as an internal evoker of mental representations — and how speech sounds might act as the elementary units of those representations.

George Marzloff

George Marzloff

I graduated from MIT (majoring in BCS and minoring in music). My research interest lies in searching for the fundamental ways humans generalize and in understanding how they interpret novel ideas.