
Big data, computer science, experimental methods, and computational text analysis are part of an ever-growing range of methods embraced by political science. As we expand our methodological horizons, the nature, scope, and focus of the questions we investigate will change.

For example, the combination of computerised text analysis techniques and Big Data now allows us to model millions of texts in a matter of hours. This can help us to gauge political preferences, the evolution of political language, or popular sentiment.
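As a rough illustration of what such modelling can look like in practice (a minimal sketch, not a description of any particular study; the documents and parameters below are placeholders), a topic model can be fitted to a collection of political texts with a few lines of scikit-learn:

```python
# Minimal sketch: topic-modelling a corpus of political texts with scikit-learn.
# The documents and parameter choices below are illustrative placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "the chancellor set out plans for tax reform and public spending",
    "voters expressed concern about immigration and border policy",
    "the committee debated amendments to the housing bill",
    # ... in practice, millions of speeches, manifestos, or tweets
]

# Convert raw text into a document-term matrix of word counts.
vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(documents)

# Fit a small topic model; real applications use far more topics and documents.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

# Inspect the most heavily weighted words in each estimated topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top)}")
```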

In turn, the increased use of the experimental method has allowed political scientists to investigate a wide set of issues, including voter behaviour, public opinion, and decision-making processes.

This new series, co-hosted by the Oxford University Politics Blog and the Oxford Q-Step Centre, is all about “methods”. What advances have we seen in recent years? What can we learn today that we could not a decade ago? And what is the future of methods in political science? Find out more in our Advances in Political Science Methods series.

About Q-Step

Q-Step is a £19.5 million programme designed to promote a step-change in quantitative social science training in the UK. Funded by the Nuffield Foundation, the ESRC, and HEFCE, Q-Step was developed as a strategic response to the shortage of quantitatively-skilled social science graduates.

Q-Step is funding fifteen universities across the UK to establish Q-Step Centres that will support the development and delivery of specialist undergraduate programmes, including new courses, work placements and pathways to postgraduate study.

The resulting expertise and resources will be shared across the higher education sector through an accompanying support programme which will also forge links with schools and employers.

Oxford is one of the fifteen universities selected nationally to host a Q-Step Centre. The Oxford Q-Step Centre is hosted by the Department of Politics and International Relations, in close co-operation with the Department of Sociology, University of Oxford.


About thirty major pieces of government legislation are produced annually in the UK. As there are five main opportunities to amend each bill (two stages in the Commons and three in the Lords), and bills may undergo hundreds, even thousands, of amendments, comprehensive quantitative analysis of legislative changes is almost impossible by manual methods. We used insights from bioinformatics to develop a semi-automatic procedure to map the changes in successive versions of the text of a bill as it passes through parliament. This novel tool for scholars of the parliamentary process could be used, for example, to compare amendment patterns over time, …
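The alignment idea can be sketched with Python's standard library. This is only a loose analogue of the semi-automatic procedure described in the post, using invented clause texts: difflib performs the same kind of longest-matching-subsequence comparison that underlies pairwise sequence alignment in bioinformatics.

```python
# Loose analogue of aligning successive versions of a bill's text.
# The clause texts below are invented placeholders, not real legislation.
import difflib

version_commons = [
    "A person commits an offence if the person discloses protected information.",
    "The penalty is a fine not exceeding level 3 on the standard scale.",
]
version_lords = [
    "A person commits an offence if the person knowingly discloses protected information.",
    "The penalty is a fine not exceeding level 4 on the standard scale.",
    "This section comes into force two months after Royal Assent.",
]

# Align the two versions clause by clause and report what changed.
matcher = difflib.SequenceMatcher(a=version_commons, b=version_lords)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag == "equal":
        continue  # unchanged provisions
    print(f"{tag.upper()}:")
    for line in version_commons[i1:i2]:
        print(f"  - {line}")
    for line in version_lords[j1:j2]:
        print(f"  + {line}")
```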

The final Presidential debate of 2016 was as heated as the previous two, as the following name-calling exchange demonstrates: CLINTON: …[Putin would] rather have a puppet as president of the United States. TRUMP: No puppet. No puppet. CLINTON: And it’s pretty clear… TRUMP: You’re the puppet! CLINTON: It’s pretty clear you won’t admit … TRUMP: No, you’re the puppet. It is easy to form our opinions of the debate, and of the differences between the Presidential candidates, from excerpts like this and memorable one-liners. But are small extracts representative of the debate as a whole? Moreover, how can we objectively analyse …
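One simple way to look beyond memorable extracts (not necessarily the approach taken in the post; the snippet below reuses only the lines quoted above and is purely illustrative) is to split the full transcript by speaker and count each candidate's word use:

```python
# Illustrative only: tally each speaker's word use across a debate transcript.
# The transcript is limited to the exchange quoted above; a real analysis
# would use the complete debate.
from collections import Counter
import re

transcript = [
    ("CLINTON", "He'd rather have a puppet as president of the United States."),
    ("TRUMP", "No puppet. No puppet."),
    ("CLINTON", "And it's pretty clear..."),
    ("TRUMP", "You're the puppet!"),
    ("CLINTON", "It's pretty clear you won't admit..."),
    ("TRUMP", "No, you're the puppet."),
]

# Accumulate word counts per speaker.
counts = {}
for speaker, utterance in transcript:
    words = re.findall(r"[a-z']+", utterance.lower())
    counts.setdefault(speaker, Counter()).update(words)

for speaker, counter in counts.items():
    print(speaker, counter.most_common(5))
```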

How can we improve the quality of post-election survey data on electoral turnout? That is the core question of our recent paper. We present a novel way to question citizens about their voting behaviour that increases the truthfulness of responses. Our research finds that the inclusion of “face-saving” response items can drastically improve the accuracy of reported turnout. Usually, the turnout reported in post-election surveys is much higher than in reality, and this is partly due to actual abstainers pretending that they have voted. Why do they lie? In many countries, voting is a social norm widely shared by the …
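To make the idea concrete, the toy example below (with invented response categories and counts, not the paper's data or wording) shows how answers to a turnout question that includes face-saving options might be tallied, with every face-saving category counted as abstention:

```python
# Toy illustration: estimating turnout from survey answers when the question
# offers "face-saving" ways to admit not voting. Categories and counts are invented.
responses = {
    "I voted": 620,
    "I usually vote, but didn't this time": 90,
    "I thought about voting, but didn't": 70,
    "I did not vote": 120,
}

voted = responses["I voted"]
total = sum(responses.values())

# Everything except "I voted" is treated as abstention, including the
# face-saving categories that abstainers find easier to select truthfully.
print(f"Reported turnout: {voted / total:.1%}")
```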

How can big data and data science help policy-making? This question has recently gained increasing attention. Both the European Commission and the White House have endorsed the use of data for evidence-based policy-making. Still, a gap remains between theory and practice. In this blog post, I make a number of recommendations for systematic development paths. Research trends shaping Data for Policy: ‘Data for policy’ as an academic field is still in its infancy. A typology of the field’s foci and research areas is summarised in the figure below. Besides the ‘data for policy’ community, there are two …

Is it possible to produce a more accurate prediction by asking people how confident they are that their preferred choice will win the day? As the Brexit referendum date approaches, the uncertainty regarding its outcome is increasing, and so are concerns about the precision of the polls. The forecasts are, once again, suggesting a very close result. Ever since the general election of May 2015, criticism of pollsters has been rampant. They have been accused of complacency, of herding, of sampling errors, and even of deliberately manipulating their results. The UK is hardly the only country where pollsters are …
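As an illustration of the general idea of expectation-based polling (a hypothetical sketch with invented data, not the authors' forecasting model), responses about which outcome people expect to win can be aggregated and weighted by their stated confidence:

```python
# Hypothetical illustration: aggregate "which side do you expect to win?" answers,
# weighted by each respondent's stated confidence (0-100). Data are invented.
responses = [
    ("Remain", 80),
    ("Leave", 60),
    ("Remain", 55),
    ("Leave", 90),
    ("Leave", 70),
]

# Sum confidence by expected winner.
weights = {}
for expected_winner, confidence in responses:
    weights[expected_winner] = weights.get(expected_winner, 0) + confidence

total = sum(weights.values())
for outcome, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{outcome}: {weight / total:.1%} of confidence-weighted expectations")
```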

Dr Ben Lauderdale from the Department of Methodology at the London School of Economics and Political Science delivered a special masterclass on Tuesday 24th May 2016. Ben’s research focuses on the measurement of political preferences from survey, voting, network, and text data, with a particular focus on text. The event presented the latest developments in the ways social scientists can use text, and provided an excellent opportunity to explore the promises, but also the limitations, of this quickly expanding research field. For further information on text analysis in social science, see Felix Krawatzek and Andy Eggers’ Podcast Series.