I study the computational basis of human learning and inference. Through a combination of mathematical modeling, computer simulation, and behavioral experiments, I try to uncover the logic behind our everyday inductive leaps: constructing perceptual representations, separating “style” and “content” in perception, learning concepts and words, judging similarity or representativeness, inferring causal connections, noticing coincidences, predicting the future. I approach these topics with a range of empirical methods — primarily, behavioral testing of adults, children, and machines — and formal tools — drawn chiefly from Bayesian statistics and probability theory, but also from geometry, graph theory, and linear algebra. My work is driven by the complementary goals of trying to achieve a better understanding of human learning in computational terms and trying to build computational systems that come closer to the capacities of human learners.
I am interested in how people use physical reasoning for a variety of common-sense tasks, such as prediction, inference about object properties, and action planning. To support these capacities, we all have the ability to simulate how our environment will unfold based on the physics of the world. Using a combination of psychophysics and computational modeling, I study how we are able to perform this simulation, how explicit knowledge and rules affect our simulations, and how we can draw on these predictions to make inferences about the world and plan our actions.
I study the cognitive mechanisms underlying the human moral conscience. In particular, I am attempting to characterize the abstract rules used by preschoolers when they make moral judgments, many of which rely on subtle differences in the mental states of agents. Describing these rules can help us understand the representations and computations that power the moral mind.
I study social perception and decision-making in humans using computational models and behavioral experiments. People are very good at making judgments based on incomplete and uncertain evidence, while efficiently combining expectations and sensory stimuli. I am interested in the probabilistic computation that makes social inference and long-term planning possible. How do we make choices that pay off only far in the future?
How do we make sense of actions of people whose goals and values are very different from our own?
I study how intuitive, potentially abstract theories can support perception. More generally, I am interested in understanding how knowledge expressed in different forms (e.g., propositionally or as a set of examples) can be synthesized and applied. To investigate these questions, I build computational models and compare the results with human psychophysical experiments.
I work on social perception and multi-agent systems. In particular, I am interested in building computational models that have human-like abilities to reason about the mental states of others and applying those abilities to multi-agent systems to solve problems such as collaboration, communication, and teaching. I am also interested in using computational tools to study human perception of animacy (e.g., the integration of intuitive physics and intuitive psychology).
I work on probabilistic programming, generative modeling and amortized inference. I have mainly looked at this through the lens of training deep neural networks to speed up sampling algorithms such as importance sampling and sequential Monte Carlo. Using these tools, I want to build systems that can efficiently learn concepts, understand scenes and agent behavior by drawing on research in computational cognitive science.
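The sampling algorithms mentioned above can be illustrated with a minimal self-normalized importance sampler; the toy Gaussian target and proposal below are illustrative assumptions, not code from this project:

```python
import numpy as np

def importance_sampling_mean(log_p, log_q, sample_q, n=50_000, seed=0):
    """Self-normalized importance sampling estimate of E_p[x].

    log_p: unnormalized log density of the target
    log_q: log density of the proposal
    sample_q: function drawing n samples from the proposal
    """
    rng = np.random.default_rng(seed)
    x = sample_q(rng, n)
    log_w = log_p(x) - log_q(x)
    w = np.exp(log_w - log_w.max())   # subtract max for numerical stability
    w /= w.sum()                      # self-normalize the weights
    return float(np.sum(w * x))

# Toy target: N(1, 1) known only up to a constant; proposal: wider N(0, 2).
log_p = lambda x: -0.5 * (x - 1.0) ** 2
log_q = lambda x: -0.5 * (x / 2.0) ** 2 - np.log(2.0)
sample_q = lambda rng, n: rng.normal(0.0, 2.0, size=n)

est = importance_sampling_mean(log_p, log_q, sample_q)  # close to the true mean, 1.0
```

In the amortized setting described above, a trained neural network would supply `log_q` and `sample_q` in place of the fixed Gaussian proposal used here.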
I am interested in the underlying foundations of objecthood, which is one of the most basic building blocks of our cognition. By combining behavioral, electrophysiological (ERP), and computational methods, I hope to understand the ongoing dynamics of how humans – adults and infants – track objects and understand them.
My research interest lies in neural scene representations – the way neural networks learn to represent information about our world. My goal is to allow independent agents to reason about the world given visual observations, such as inferring a complete model of a scene – geometry, material, lighting, etc. – from only a few observations, a task that is simple for humans but currently impossible for AI.
My research synthesises machine learning and cognitive science to develop learning systems which are more flexible, interpretable, and human-like. This goal is best realised by models which embody explicit structured relationships between parts: compositionality, hierarchy, causality. Given this perspective, I aim to combine probabilistic programming (for its rich knowledge representation) with deep learning and evolutionary computation (for tractable search and inference).
I am interested in the interaction of symbolic probabilistic reasoning and sub-symbolic statistical learning, in an attempt to understand and replicate higher-level cognition in a cognitively meaningful way. To do so, I explore models that combine structured generative frameworks, like probabilistic programs, with deep learning. I care about things like compositionality and the origin of concepts.
I study program learning. Specifically, I work on the idea that learning programs from data could provide a way to develop machines which possess more human-like intelligence. My research has two aims: 1) Developing novel methods for synthesizing programs, combining deep learning and symbolic techniques. 2) Applying program synthesis to build human-like AI models to solve concept learning and reasoning tasks.
I am interested in how computers can understand language as flexibly as people do, and how people do it in the first place. My research uses program synthesis, planning, and physical simulation to study how language is learned and understood across different world and linguistic contexts.
My work focuses on how children’s rich theories of the world and sophisticated mental simulations are used to support action. Key areas of interest include planning, tool use, and spatial reasoning.
I work at the intersection of Bayesian Theory of Mind and AI value alignment, using probabilistic programming to model and infer the latent hierarchical structure of human motivations so as to build AI that better understands human goals and values.
I would one day like to have a robot that lives in my house and does all my chores for me. To that end, I study the intersection of planning and learning, especially in object-oriented and relational settings. My research uses techniques from classical planning, task and motion planning, program synthesis, reinforcement learning, and machine learning and sometimes takes inspiration from cognitive science.
Suhyoun (Essie) Yu
I want to understand human decision making: how and why do we choose one option over the others? We don’t always have the luxury of carefully evaluating the likelihood and value of an outcome, and indeed we are often inclined to make suboptimal choices. I am starting this investigation by modeling decision making in spatial tasks with Prospect Theory principles, and I hope to expand to other natural tasks and look into other possible explanations. Eventually, I want to implement findings from this investigation in complex dynamical control systems that heavily interact with humans.
I am interested in machines that learn to discover the structure of our world. At the same time, I take inspiration from methods in Mathematics and the Physical Sciences to build better inductive biases for ML methods. To accomplish these goals my research leverages techniques from meta-learning, learning to search, and program synthesis.
Lab Manager/Research Assistant
I’m interested in building models of memory, attention, and reasoning in humans and animals.
Probabilistic Computing Project
Vikash Mansinghka is a research scientist at MIT, where he leads the Probabilistic Computing Project. Vikash holds S.B. degrees in Mathematics and in Computer Science from MIT, as well as an M.Eng. in Computer Science and a PhD in Computation. He also held graduate fellowships from the National Science Foundation and MIT’s Lincoln Laboratory. His PhD dissertation on natively probabilistic computation won the MIT George M. Sprowls dissertation award in computer science, and his research on the Picture probabilistic programming language won an award at CVPR. He served on DARPA’s Information Science and Technology advisory board from 2010 to 2012, and currently serves on the editorial boards of the Journal of Machine Learning Research and the journal Statistics and Computing. He was an advisor to Google DeepMind and has co-founded two AI-related startups, one acquired and one currently operational.
Researcher/MIT-IBM Watson AI Lab
I study multimodal learning from videos. My research goal is to build a video understanding system that can recognize, interact with, and understand the physical world through the multimodal integration of static imagery, motion, sound, and language.
I study how humans and machines can perceive faces and shapes in general. In particular, I am interested in the combination of visual and haptic input, the developmental trajectory of face perception, and perception under partial or full occlusion. As a computational model for faces, I focus on 3D Morphable Models as a generative representation, since they naturally disentangle the underlying variables, with inverse rendering as the inference strategy.
PhD Student, Yale
I am interested in probabilistic models of shape perception and intuitive physics, and more specifically in what features of the environment make the inference of useful information tractable.
I am interested in cognitive science, animal cognition, robotics, and AI. I want to understand how humans generalize so well from so little data, and to build machines that are equally flexible. Specifically, I focus on the interactions between predictive representations and planning, particularly in the domain of tool use.
PhD Student, Brown University
I want to endow AI agents with the ability to ground concepts that are useful for novel downstream tasks. To achieve this, I develop algorithms that allow AI agents to learn how to map their sensory inputs to sets of concepts whose composition can explain novel sensory inputs. My long-term goal is to develop AI methods that enable agents to give rise to new concepts by thinking, and to explain phenomena they have not been exposed to. In my research, I draw inspiration from cognitive science, psychology, and philosophy.
Common Sense Machines
I am interested in characterizing human curiosity and exploration with respect to their role in the construction of intuitive theories. I am especially interested in representational growth and in the way that curiosity drives the acquisition of more powerful representations, both in theory learning and problem solving. My research involves computational modeling as well as developmental and adult behavioral work.
Postdoc – UC Berkeley
I am interested in understanding what knowledge of all forms is like — including facts, procedures, goals, and theory-like systems of concepts — and how it is acquired. I am particularly interested in how we develop and apply abstract concepts like colors, kinds, sets, lists, and numbers. My work centers around computational modeling informed by behavioral experiments, primarily with adults.
PhD Student, Johns Hopkins University
My current research interest lies in visual perception and in particular, how visual information is used to complete simple actions such as reaching and grasping. At the Tenenbaum lab, I am hoping to merge this interest with current research developing computational models of intuitive physics. Specifically, I am curious about tool use and how we are able to extract relevant features from objects to complete various tasks.
Assistant Professor, Cornell University
What would it take to build a machine that can learn, reason, and perceive as flexibly and efficiently as a human? I investigate the hypothesis that at least part of the answer to this question involves program induction. Program induction systems represent knowledge in the form of symbolic code, and treat learning as a kind of program synthesis.
Common Sense Machines
What unique features of cognition give rise to the sophistication and scale of human social behavior? To answer this question, I study social decision making and strategic reasoning in humans and machines using computational tools such as Bayesian statistics, reinforcement learning, and game theory. My research focuses on rational accounts of prosocial behavior and the nature of normative concepts such as morality and fairness.
Applied Scientist and Engineer at Google, Mountain View
Postdoc – Rockefeller University
I am interested in how we generalize from prior experience to solve novel problems, in particular how we efficiently structure and repurpose prior knowledge to generate solutions to novel problems. I am especially interested in neural mechanisms. I am co-advised by Winrich Freiwald at Rockefeller University.
Assistant Professor, Stanford University
My interest lies at the intersection of computer vision, machine learning, and computational cognitive science. I am particularly interested in understanding how automatic vision systems can gain common-sense knowledge, and how this relates to human perception.
PhD Student, Yale University
Assistant Professor, CMU
My research goal is to build machines capable of understanding and recreating our visual world. My current interests include deep generative models and their applications in computer vision and computer graphics.
Assistant Professor, Yale University
My research aims to reverse-engineer how causal models of the world are implemented in the mind and brain, and spans the fields of vision, multi-sensory perception, planning, and social cognition. I use a distinctive combination of tools and methods such as probabilistic generative models, video game engines, deep neural networks, quantitative psychophysics, neuroscience data from non-human primates, and increasingly also human neuroimaging.
Assistant Professor, Harvard University
Both scientific and intuitive psychological knowledge can be viewed as theories of domains. How are these abstract knowledge structures acquired, and how do they change as new information is discovered? I am interested in these basic questions, working on modeling both intuitive knowledge such as social interaction and scientific theories such as physics, searching for a shared cognitive architecture.
My interest lies in how machines can augment human capacities with probabilistic reasoning, computational models, and machine learning. I am especially interested in building systems that understand the unique characteristics of each person and, through interaction, personalize their actions to different individuals.
Assistant Professor, University of Pennsylvania
I am interested in the computational principles underlying judgment and decision making. I care not just about the abstract principles, but also how they are approximately implemented by people as they act in the world. Much of my current work deals with how to aggregate information from multiple individuals, including in situations where the majority may be wrong. I also think about how cognitive science might inform marketing and policy-making more broadly.
Assistant Professor, Stanford University
I am interested in causality, counterfactuals, and responsibility. In my work, I investigate the ways in which these different concepts are linked. For example, when judging whether one event caused another event to happen, people often compare what actually happened with what they think would have happened in the absence of the causal event. People make use of their intuitive understanding of a given domain, such as physics or psychology, to simulate what would have happened in the relevant counterfactual world. I show that this general counterfactual account of causal attribution can also capture people’s responsibility attributions to individuals in groups for collectively brought about outcomes.
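The counterfactual account described above can be sketched as a simple noisy simulation: replay the same situation with the candidate cause removed, holding the noise fixed, and count how often the outcome flips. The forces, threshold, and noise level below are illustrative assumptions, not parameters from this research:

```python
import numpy as np

def causal_judgment(n=20_000, seed=0):
    """Toy counterfactual test: did agent A's push cause the box to cross the line?

    Actual world: the box crosses if the summed forces plus noise exceed a threshold.
    Counterfactual world: replay the very same noise with A's force removed.
    """
    rng = np.random.default_rng(seed)
    force_a, force_b, threshold = 0.6, 0.5, 1.0
    noise = rng.normal(0.0, 0.2, size=n)
    actual = force_a + force_b + noise > threshold   # with A's push
    counterfactual = force_b + noise > threshold     # same noise, A removed
    # "A caused it" on the trials where the outcome flips once A is removed
    return float(np.mean(actual & ~counterfactual))
```

The returned proportion plays the role of a graded causal judgment: it is high exactly when the outcome depended on A's push in the sampled worlds.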
PhD Student, UC Berkeley
I am interested in what we can learn from the process by which kids learn and how we can apply it to machine learning.
PhD Student, UC Berkeley
Co-founder and Chief Scientist at iSee AI
I’m broadly interested in social cognition and more specifically in theory of mind. Social cognition describes how people reason about interpersonal situations and interact with other people; theory of mind describes people’s reasoning about the mental states of others, such as beliefs, desires and emotions. My research uses insights from psychology and philosophy to guide the development of computational models of people’s intuitive theories of other people. These models are implemented with technology from artificial intelligence and machine learning, and tested empirically using behavioral experiments.
I am interested in a variety of topics at the intersection of cognitive science, machine learning, and artificial intelligence. I want to better understand how theories and concepts about the world are acquired and represented, how we can characterize the minimal requirements for acquiring them, and how they are used for reasoning.
PhD Student, UC Berkeley
I am interested in the inductive biases and algorithmic constraints that guide learning agents to develop their own languages for representing problems and modeling their world.
Senior Research Fellow, University of Oxford
I am interested in understanding collective behavior. That is, I think about how group behavior is determined by the qualities and interactions of individual group members. Agent-based modeling is currently the main class of formal models I use to study social systems. Agent-based models are nice because they relate underlying psychology to aggregate social behavior.
Assistant Professor, McGill University
In my research, I develop mathematical models of the way children learn language and the way adults generalize linguistic rules to create new words and sentences. My research draws on experimental methods from psychology, formal modeling techniques from natural language processing, theoretical tools from linguistics, and problems from all three.
Co-founder and CEO at iSee AI
Yibiao was a Postdoc in the lab. He obtained his PhD from the Center for Vision, Cognition, Learning, and Autonomy at UCLA in 2015.
Assistant Professor, Yale University
I study the fundamental representations and computations that underlie our ability to navigate the social and physical world. To date, much of my work specifically looks at how we represent and reason about other people’s minds and on how we infer what they know, think, and want.
Common Sense Machines
I am broadly interested in machine learning, computational statistics, and computer vision. My current research is at the intersection of probabilistic inference, discriminative learning techniques, and computational vision. To this end, I have been trying to push the idea of framing problems in computer vision in the context of inverse graphics (a.k.a. graphics programming). Generative models provide the flexibility to model complex structure in the world, but inference is often intractable or slow. Along with general-purpose MCMC techniques, I am interested in exploring fast discriminative techniques to speed up inference in probabilistic models with complex structured output spaces. I have also been working extensively on combining ideas from variational methods and particle filtering to speed up inference in time series models.
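The general-purpose MCMC techniques mentioned here can be sketched with a minimal random-walk Metropolis sampler; the toy Gaussian target is an illustrative assumption, not a model from this work:

```python
import numpy as np

def metropolis_hastings(log_p, x0, n_steps=20_000, step=1.0, seed=0):
    """Random-walk Metropolis sampler for an unnormalized log density log_p."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_p(x0)
    samples = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.normal()       # symmetric Gaussian proposal
        lp_prop = log_p(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept with prob min(1, p'/p)
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

# Toy posterior: N(2, 0.5^2), known only up to a normalizing constant.
draws = metropolis_hastings(lambda x: -0.5 * ((x - 2.0) / 0.5) ** 2, x0=0.0)
posterior_mean = draws[5_000:].mean()   # discard burn-in; close to 2.0
```

In an inverse-graphics setting, `log_p` would instead score how well a rendered scene hypothesis explains the observed image, and discriminative networks would propose smarter moves than the blind random walk above.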
My primary interests are in Bayesian machine learning and AI, with a focus on Bayesian nonparametrics and probabilistic programming. I am particularly interested in approximate Bayesian inference methods with provable guarantees. My research includes the analysis of existing inference algorithms as well as the development of novel ones. I believe that probabilistic programming has the potential to be a language in which to describe expressive classes of generative models which admit tractable algorithms with sufficiently strong approximation guarantees.
I am broadly interested in the areas of machine learning, statistical inference, and information theory. My work focuses on developing tools for identifying structure in high-dimensional datasets using techniques from these fields.
Researcher, Probabilistic Computing Project
My background is in mathematical logic and computability theory, and I’m interested in computable probability theory and the theory of stochastic computation.
Assistant Professor, University of Toronto
Assistant Professor, New York University
I study cognition through behavioral experiments and computational models. Currently, I am researching how people learn perceptual categories, such as the names of animals or the speech sounds of their native language.
Research Scientist, Google DeepMind
I am broadly interested in biological and machine perception, perceptually-guided motor control, and machine learning. I am specifically focused on probabilistic models of human spatial perception, reaching behavior, causal event perception, and low-resolution scene recognition. My goal is to develop formal models of how people draw sophisticated perceptual interpretations and produce precise, robust actions.
Assistant Professor, UCSD
I study language using computational modeling and experimental methods. My research has focused on pragmatic reasoning and the acquisition of syntactic knowledge.
Assistant Professor, Boston College
Language allows someone to take an idea in their mind, bundle it up, and transmit it through some medium, where it will be received by another person and unpacked into (something close to) the original idea. The linguistic signal is complex: some of the information is transmitted through the words, some through the syntactic structure, some through non-linguistic cues like gesture, some through less-linguistic cues like intonation, and some is not even encoded at all but the speaker has some reasonable expectation that the listener can infer it. My research is focused on understanding what information is transmitted via what mechanism, and how.
Associate Professor, Harvard University
I am interested in the intersection of cognition, neuroscience, and artificial intelligence. My current work examines how the brain acquires structured representations of the world in a variety of domains, including motion perception, reinforcement learning, and semantic cognition.
I want to understand how to represent, learn, and reason with structured knowledge. I work on probabilistic programming as a means of knowledge representation, and probabilistic inference as a method of machine learning and reasoning. I am broadly interested in topics in cognitive science and artificial intelligence that contribute to this project, including concept learning, theory of mind, game theory, and decision theory.
My interests lie at the intersection between cognitive science and artificial intelligence. Broadly, I want to understand how people integrate perception and reasoning to understand the world around them. I am particularly interested in computationally specifying the algorithms underlying reasoning, as well as the knowledge representations accessed by those algorithms, by drawing on methods from machine learning and statistics. I have most recently focused on how people reason about everyday physical events, modeling this “intuitive physics” through knowledge-rich simulations consistent with Newtonian physics (copied from Tom Griffiths’ lab website at UC Berkeley).
Assistant Professor, Brigham Young University
My research interests lie at the intersection of perception, control and cognition, and how all three have synergistic effects on learning. Specific interests include reinforcement learning, unsupervised learning of useful knowledge representations (including predictive representations of state and structured nonparametric Bayesian distributions), information theory, manifold learning, kernel methods, massively parallel processing, visual perception, and optimal control.
Assistant Professor, Duke
I am interested in developing new statistical methods, using hierarchical and nonparametric Bayesian models, for extracting useful information from data when little or no supervision is available. I aim to use these methods to model human behavior, including categorization, and social interactions in online environments.
Associate Professor, CMU
My broad research interests involve developing learning and inference algorithms for probabilistic hierarchical models that contain many layers of nonlinear processing. Some of my recent work has concentrated on the theoretical analysis and learning of Deep Boltzmann Machines, with applications to information retrieval, visual object recognition, and nonlinear dimensionality reduction. My other interests include Bayesian inference, transfer learning, matrix factorization, and approximate inference and learning of large scale graphical models.
Assistant Professor, University of Toronto
My research interests lie at the intersection of computer science, statistics and probability theory; I study probabilistic programming languages to develop computational perspectives on fundamental ideas in probability theory and statistics. I am particularly interested in the use of recursion to define nonparametric distributions on data structures; representation theorems that connect computability and probabilistic structures; the complexity of inference; and the limitations of probabilistic computation and other questions in computable probability theory.
Assistant Professor, UC Berkeley
I am interested in how people use probabilistic inference to acquire and process interestingly structured information. I am also interested in information theory, theory of computation, philosophy of mind, and symbolic dynamics.
Postdoc, Broad Institute
My research centers on language as a symbolic communication system that makes infinite use of finite means. I believe that the question of how such a system is represented and implemented in the brain is fundamental to cognitive science. In the past, I have employed different methods for studying this question — from structural descriptions of syntactic phenomena, to Bayesian models and reaction-time experiments.
Associate Professor, Stanford
I approach the study of mind with a combination of formal (mathematical) analysis, philosophical orientation, and empirical grounding. My research focuses on concepts and causality: What is the nature of causal and conceptual knowledge? How do we acquire this knowledge, and how do we use it?
Associate Professor, Stanford
In order to communicate successfully, children acquiring a language have to learn to segment words from continuous speech, learn the meanings of those words, and figure out how to put them together to make coherent sentences. I’m interested in all three of these problems, and I study them using artificial language learning experiments with adults and infants as well as probabilistic computational models. I work jointly with Josh Tenenbaum and Ted Gibson.
Assistant Professor, UCSD
Without constraints and assumptions, it is impossible to figure out what sorts of stuff in the physical world caused our retinal input. I am primarily interested in the priors and structures our visual system uses to solve this problem given limited resources. To this end, I study adaptation, attention, and other visual processes with psychophysics, computational methods, and fMRI.
Professor, TU Darmstadt
Without the ability to form concepts and categorize accordingly, the world would appear to be a chaotic place. How do we learn new concepts? How are concepts represented in memory? How are concepts related to the world? I’m trying to address these questions by combining insights from machine learning and cognitive psychology.
Associate Professor, Rutgers University Newark
I started as Josh Tenenbaum’s lab coordinator in July of 2002. Now I’m a graduate student at MIT working jointly with Professor Laura Schulz in the Early Childhood Cognition Lab and with Professor Josh Tenenbaum. I’m interested in the development of human causal reasoning from infancy to adulthood and in the computational models that may shed insight on that process.
Departmental Fellow in Systems Biology, Harvard Medical School
I’d like to explore how ideas from Bayesian statistics and machine learning could help us interpret and systematically analyze neural and biological data.
Product Manager, Google
I’m interested in how people learn the meanings of words and infer relationships between words or between concepts. I’m also interested in how learning language can influence conceptual structure and development. One of my recent projects looks at how people can understand what is sensible but extremely rare or absent from nature (like a blue banana) versus what is nonsense (like an hour-long banana), based on the limited evidence of what actually occurs in the world.
Associate Professor, Carnegie Mellon University
At some level, semantic representations are mathematical objects. I’m interested in finding a small set of structures and operations that can be composed to build these objects. I also enjoy thinking about formal models of social systems.
Associate Professor, University of Melbourne
I’m interested in applying Bayesian models to aspects of cognitive development, in particular to issues of learnability. What biases must children have in order to acquire knowledge in different domains (syntax, word and feature learning, understanding of kinds)? To what extent are these biases domain-general?
Assistant Professor, Ben-Gurion University of the Negev
How can computers learn to make good decisions in groups comprising both people and other computer agents? I study this question by developing representations and algorithms for learning the social factors that affect people’s decision-making in a variety of domains, such as negotiation, intelligent tutors and game playing.
I’m interested in understanding how anything made of non-intelligent parts could behave as intelligently as a human being. More specifically, I develop models and inference algorithms that combine the ability of probability theory to quantify uncertainty, with the ability of first-order logic to efficiently describe large sets of related objects. I’m a post-doc in Leslie Kaelbling’s group at CSAIL, but I also collaborate with Josh Tenenbaum’s group.
Associate Professor, Rutgers University – Newark
I am interested in the kinds of things that make people seem clever — especially the ability to make robust, flexible, & reliable inferences from limited data. Particular areas of recent interest are learning multiple ways of organizing knowledge in a domain, flexible use of background knowledge to support inferences in different contexts, and learning in pedagogical & communicative settings. I try to understand these abilities through formal mathematical analyses and behavioral experiments. The goal of my research is to try to develop an understanding of people’s “real-world” reasoning.
Tevye Rachelson Krynski
Engineering Manager, Leanplum
I am interested in using cognitive models of human belief and causal reasoning to understand psychological phenomena such as base rate neglect. The ultimate goal of my research would be not just a computational model of human cognition, but also a method for developing AI systems that think like people. I got my PhD in BCS in 2006.
I did computational and cognitive neuroscience in the group of Josh Tenenbaum, BCS, MIT (previously with Daniel Wolpert and Peter König). I specialize in modelling and movement psychophysics. I expect my theories to make experimental predictions, provide a compact description of data, and lead to computationally strong algorithms. My experiments should falsify theories.
Professor, UC Berkeley
My research interests lie in developing computational models of higher-level cognition. In particular, I’m interested in developing rational accounts of cognition using probabilistic generative models and Bayesian statistics. My current areas of interest are understanding people’s everyday inductive leaps — difficult inductive problems we solve every day, like predicting the future, learning causal relationships, and noticing coincidences — and the interface between psychology and machine learning in developing statistical models of language.
Professor, UC Irvine
My research interests span a diverse set of topics in cognitive science such as episodic and semantic memory, dynamic decision making, and causal reasoning. In each of these areas, I combine mathematical and computational modeling with behavioral experiments. The models and experiments are tightly coupled: I try to formulate empirical questions with the goals of constraining, developing, or testing between alternative computational models of how people learn, process, and represent information. My research interests also include some computer science topics in the domain of statistical machine learning and information retrieval. The adoption of recent machine learning methodology is useful in advancing cognitive science research, especially in the area of semantic memory.
I’m interested in some of the conceptual groundwork that might someday make psychology more coherent — how we (or anything) can weave a web of concepts to explain experience. More technically, I’m interested in how to extend the range of probabilistic models of induction to richer descriptions of experience than those available to traditional probabilistic models. I’m also interested in the form of the associations of words with conceptual formulae, how we learn those associations, and how we use them to invoke thoughts in each other and ourselves.
My research in the Tenenbaum lab is focused on exploring computational models of how humans learn and generalize through inductive inference. My undergraduate honors thesis analyzed how a computer might construct a hypothesis space (i.e., candidate guesses about the concept to be learned) that could match human generalization performance. In that work, I examined several different unsupervised learning techniques for building a hypothesis space for a Bayesian concept learner. Recently, I have been looking at how humans seem to use abstract taxonomic, tree-based models of similarity to guide their generalization. In my other lab, I am working on understanding the dynamics underlying neural computations in small networks of neurons.
I study the neural and psychological basis of social cognition. Do we have dedicated mechanisms for recognising and/or reasoning about other minds? How and why does the human brain succeed so easily where computers and logicians fail? Addressing these questions, my work spans the disciplines of cognitive neuroscience, developmental psychology, social psychology, computational modelling and philosophy.
PhD student, Caltech
I graduated from the Brain and Cognitive Science Department at MIT doing a UROP with Professor Tenenbaum. My personal research interests for the future include modeling human social cognition.
I graduated from the Brain and Cognitive Science Department at MIT doing a UROP with Professor Tenenbaum. I am primarily interested in concept learning and conceptual change during learning and development.
I graduated from Brain and Cog Sci, only to stick around MIT in the Speech and Hearing Bioscience and Technology program. (I’m a cognitive scientist at heart, of course.) I’m interested in language — both its evolution as a communication system and its purpose as an internal evoker of mental representations — and how speech sounds might act as the elementary units of those representations.
I graduated from MIT (majoring in BCS and minoring in music). My research interest lies in searching for the fundamental ways humans generalize and understanding how they interpret novel ideas.