Research summary
Modern machine learning allows computers to learn from data and act with less human intervention, while deep learning enables them to do so using artificial neural networks loosely inspired by the human brain. Both fields have advanced rapidly in recent years. But these breakthroughs have also raised concerns about the transparency and fairness of machine learning systems. That has made interpretability—the ability to explain machine learning systems and present their reasoning to humans—all the more important.
As Canada Research Chair in Interpretability for Machine Learning, Dr. Stan Matwin is tackling several important aspects of interpretability. In particular, he and his research team are exploring new uses of relational knowledge representation techniques.