The fields of artificial intelligence (AI) and neuroscience have much in common. At its core, neuroscience aims to better understand the brain by deciphering its complex networks and processes.
Complementarily, many AI-focused research projects involve constructing synthetic counterparts to components of the human brain. The connection between these fields benefits both computer scientists and biology-focused neuroscientists, as it helps us understand natural and artificial learning systems alike.
These domains of research naturally inspire one another. In the 1950s, researchers had already begun exploring how they might model the information-processing capabilities of human neurons. Today, AI is giving rise to new tools for neuroscience research, some of which are contributing to new hypotheses about how cognitive processes and tasks are performed in the brain.
On Saturday, July 9 from 15:15-16:45, international experts will gather to discuss a wide range of subjects related to the interaction between neuroscience and AI.
Dr. Kanaka Rajan, Assistant Professor at the Friedman Brain Institute at the Icahn School of Medicine at Mount Sinai, combines data from neuroscience experiments with powerful computational frameworks to build artificial models of the brain. As a computational neuroscientist, her work aims to bridge the gap between AI researchers' drive to find high-performing systems for a specific goal or application and biologists' goal of discovering how a system solves problems, making predictions from models that can drive novel hypotheses about brain function.
During the session, Dr. Rajan will discuss curriculum learning, a new approach her lab is taking to probe learning mechanisms in both biological and artificial brains. Mirroring the meaningful ordering of human curricula, this approach trains a machine learning model on training sets that progress from easier to harder, or "curricula".
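To make the idea concrete, here is a minimal, purely illustrative sketch of curriculum learning on a toy task; it is not Dr. Rajan's actual method, and the task, difficulty measure, and staging scheme are all assumptions chosen for simplicity. A simple perceptron learns to classify the sign of a number, seeing "easy" examples (far from the decision boundary) before "hard" ones (near it):

```python
import random

def make_data(n=200, seed=0):
    """Toy task: classify the sign of x. Points with |x| near 0 are 'hard'."""
    rng = random.Random(seed)
    xs = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    return [(x, 1.0 if x > 0 else -1.0) for x in xs]

def train(data, stages=3, epochs=5, lr=0.1):
    """Curriculum training (illustrative): sort examples from easy to hard,
    then train on progressively larger, harder subsets."""
    data = sorted(data, key=lambda ex: -abs(ex[0]))  # easiest first
    w, b = 0.0, 0.0
    for stage in range(1, stages + 1):
        subset = data[: len(data) * stage // stages]  # widen the curriculum
        for _ in range(epochs):
            for x, y in subset:
                if y * (w * x + b) <= 0:  # perceptron update on mistakes
                    w += lr * y * x
                    b += lr * y
    return w, b

def accuracy(data, w, b):
    return sum((w * x + b > 0) == (y > 0) for x, y in data) / len(data)

data = make_data()
w, b = train(data)
print(f"accuracy: {accuracy(data, w, b):.2f}")
```

The key design choice is the schedule: early stages expose the model only to unambiguous examples, and harder ones are folded in as training progresses, rather than presenting all examples in random order from the start.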
An experimental cognitive neuroscientist, Prof. Stanislas Dehaene of the Collège de France will argue that our species is truly unique in its capacity to learn and that, at least for now, our brain can still learn better than any existing machine!
In addition to elaborating on some of the work featured in his 2020 book, How We Learn, he will also speak about new research being carried out in his lab. One specific area he and his team are investigating is humans' impressive ability to find structure in sequences (as in grammar) or in space (as in geometry). The data the team have generated so far pose a challenge to current artificial neural networks, which do not yet achieve similar performance and are generally poor at handling symbols and grammar.