When we read, our eyes send signals to our brains, which then interpret those signals to decipher words. As Canada Research Chair in Computational Neuroscience, Dr. Joel Zylberberg is identifying the “language” used in this signalling.
Zylberberg and his research team are training deep learning algorithms from artificial intelligence (AI) to respond to visual scenes the same way the brain’s visual cortex does. This yields “camera-to-brain” translator algorithms that can be paired with precision brain stimulation methods to write visual information directly into the brain. By mimicking the mammalian visual system, these algorithms could lead to implantable devices that stimulate a blind person’s brain and restore their ability to see. Zylberberg’s work will also have important implications for improving technologies that rely on visual judgements, such as autonomous vehicles.
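To give a flavour of the idea, the first step in this line of work is a neural "encoding model": a function, fit to recordings, that predicts visual-cortex firing rates from the pixels of an image. The sketch below is a hypothetical toy version using ridge regression in place of a deep network, with synthetic stimuli and a made-up ground-truth response function standing in for real neural data; none of the names or numbers come from Zylberberg's actual research.

```python
import numpy as np

# Toy "encoding model" sketch: predict firing rates from image pixels.
# All data here are synthetic; a linear ridge readout stands in for the
# deep networks used in real camera-to-brain modelling work.

rng = np.random.default_rng(0)
n_images, n_pixels, n_neurons = 500, 64, 10

# Synthetic stimuli (flattened image patches) and a hidden ground-truth
# mapping playing the role of the visual cortex's response function.
X = rng.normal(size=(n_images, n_pixels))
true_filters = rng.normal(size=(n_pixels, n_neurons))
rates = np.maximum(X @ true_filters, 0)        # rectified "firing rates"
rates += 0.1 * rng.normal(size=rates.shape)    # recording noise

# Fit the encoding model: ridge regression from pixels to rates.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_pixels), X.T @ rates)

# Evaluate on held-out stimuli: do predicted rates track the "true" ones?
X_test = rng.normal(size=(100, n_pixels))
pred = X_test @ W
target = np.maximum(X_test @ true_filters, 0)
corr = np.corrcoef(pred.ravel(), target.ravel())[0, 1]
print(f"held-out correlation: {corr:.2f}")
```

Inverting a model like this, so that a desired pattern of neural activity can be produced by targeted stimulation rather than merely predicted, is the harder "writing" direction the translator algorithms aim at.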