Political Science and the “Big Data” Revolution
The “big data” revolution is upon us: the growing volume and availability of data are driving the development of ever more flexible techniques for analyzing large datasets. One of the biggest questions facing researchers is: how does what we learn from these new methods relate to what we have learned over the last 75 years of data collection and analysis?
Since the late 1950s, political scientists have tested theories in essentially the same way: use observations of the world to generate theory, use theory to generate new hypotheses about how the world should work, and then collect and analyze data to test those hypotheses. In a “big data” world, however, the importance of theory is diminished in favour of sophisticated software that can relate causes to consequences in a very flexible way. The focus is not on testing a hypothesis, but on predicting the political outcome of interest.
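To make the contrast concrete, here is a minimal sketch of the two workflows on simulated data. The variable names (income, education, turnout) and the specific models are hypothetical illustrations, not drawn from any particular study: a theory-driven analysis tests a stated hypothesis about a coefficient, while a prediction-driven analysis judges a flexible learner by its out-of-sample accuracy.

```python
# Illustrative only: simulated data, hypothetical variables.
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
income = rng.normal(size=500)
education = rng.normal(size=500)
turnout = 0.5 * income + 0.3 * education + rng.normal(scale=0.5, size=500)

# Theory-driven workflow: specify the model implied by a hypothesis and
# test whether the coefficient on income is distinguishable from zero.
X = sm.add_constant(np.column_stack([income, education]))
fit = sm.OLS(turnout, X).fit()
print(fit.summary())  # p-values are the hypothesis tests

# Prediction-driven workflow: fit a flexible learner and judge it by
# out-of-sample accuracy rather than by a hypothesis test.
X_tr, X_te, y_tr, y_te = train_test_split(
    np.column_stack([income, education]), turnout, random_state=0)
rf = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
print("out-of-sample R^2:", rf.score(X_te, y_te))
```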
Dr. David Armstrong, Canada Research Chair in Political Methodology, aims to integrate these two approaches. He and his research team are looking to merge the flexibility of methods for discovering relationships in big data with more conventional methods that allow for the testing of hypotheses. Their research will allow for a much deeper and more nuanced understanding of how political phenomena work.
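One way such a merger can look in practice, sketched here with simulated stand-in data rather than Armstrong's actual models, is to let the data choose the shape of a relationship (via a spline basis) while still testing whether that added flexibility improves on the hypothesized straight-line specification.

```python
# Illustrative sketch: flexible estimation combined with a formal test.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({"income": rng.normal(size=500),
                   "education": rng.normal(size=500)})
# The simulated relationship is nonlinear in income.
df["turnout"] = (np.sin(df["income"]) + 0.3 * df["education"]
                 + rng.normal(scale=0.3, size=500))

# Hypothesized (linear) model versus a spline-based flexible model.
linear = smf.ols("turnout ~ income + education", data=df).fit()
flexible = smf.ols("turnout ~ bs(income, df=6) + education", data=df).fit()

# F-test of the linear specification against the flexible one: a small
# p-value says the flexible curve captures structure the straight-line
# hypothesis misses.
f_stat, p_value, df_diff = flexible.compare_f_test(linear)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```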
Armstrong's research has broad implications across the medical, natural and social sciences. In fields where experiments (the gold standard for establishing causality) are simply not possible, these new methods will allow researchers to draw stronger causal conclusions from observational (non-experimental) data.