Next-Generation Machine Learning
Artificial neural networks are state-of-the-art machine learning algorithms inspired by the human brain. Trained to “learn” step by step as they complete a series of tasks, neural networks have dramatically changed computing for image and language understanding, robotics, genetics, chemistry and more.
Despite these advances, neural networks are time-consuming to implement and debug, and our traditional approaches to designing them are barely keeping pace with the field's rapid growth.
Dr. Roger Grosse, Canada Research Chair in Probabilistic Inference and Deep Learning, applies detailed theoretical and empirical analyses of machine learning algorithms, such as neural networks, to real-world problems. He and his research team are working to improve the power and reliability of neural networks while making them easier to program and build. Grosse and his team are also tackling the problem of neural networks assigning high confidence to incorrect predictions: work that is essential for the design of systems like self-driving cars and dialogue assistants (such as Siri or Alexa).
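The overconfidence problem mentioned above can be sketched in a few lines. The softmax function, which converts a classifier's raw outputs (logits) into probabilities, saturates for extreme inputs, so a model can report near-certainty on data unlike anything it was trained on. The logit values below are illustrative stand-ins, not outputs of any real model:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits over three classes: one for a familiar,
# in-distribution input, one for a far-out-of-distribution input
# that drives the network's activations to extreme values.
logits_in = np.array([1.0, 0.5, -0.2])
logits_ood = np.array([40.0, 12.0, -3.0])

p_in = softmax(logits_in)
p_ood = softmax(logits_ood)

print(p_in.max())   # moderate confidence (~0.52)
print(p_ood.max())  # essentially 1.0: near-certain, even if the prediction is wrong
```

Because the exponential in softmax amplifies gaps between logits, an unfamiliar input that produces large activations yields a probability of nearly 1 for one class, which is exactly the kind of misleading confidence that matters for safety-critical systems.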
Grosse’s work will also help us understand why neural networks make the decisions they do, so programmers can troubleshoot problems more effectively. His pioneering approach to recording how the layers of a neural network are configured is a key step toward faster, less costly and more reliable automated design.
By advancing our use and understanding of machine learning in such innovative ways, Grosse is broadening the types of questions we can ask and answer with 21st-century computer science.