Data visualisation, web scraping, and text analysis in R


Course information


ECTS: 2.5 
Number of sessions: 4
Hours per session: 3-4
Entry level: Advanced
Course fee:

  • free for PhD candidates of the Graduate School
  • € 440,- for non-members
  • consult our enrolment policy for more information

Contact:

Telephone: +31 10 4082607 (Graduate School).


Session 1
May 17 (Thursday) 2018
09:30-12:30
G building (directions) G3-32

Session 2
May 17 (Thursday) 2018
13:00-17:00
G building (directions) G2-35

Session 3
May 24 (Thursday) 2018
09:30-12:30
Sanders building (directions) 1-08

Session 4
May 24 (Thursday) 2018
13:00-17:00
Sanders building (directions) 1-08


NOTE
This course was previously entitled "Big data analysis and visualisation". The course content remains largely unchanged. 


Introduction

Increasingly, scholars from the social sciences and humanities use ‘big data’ to conduct research. These data can be obtained from a wide variety of online sources, such as web sites and social media, or from external data providers (for example, Statistics Netherlands).

This course introduces the issues involved in collecting, preparing, analysing, and visualising ‘big’ data. Participants will learn how to write, debug, and keep track of their own code using R, a popular programming language for data manipulation, analysis, and visualisation.


Learning objectives

  • To acquire a basic understanding of big data and social media analytics in the context of social science and humanities research
  • To be able to write code in R in order to obtain, prepare, analyse, and visualise data
  • To understand how to automate data collection from web sites and social media
  • To gain basic proficiency with tools for analysing large quantities of text
  • To be able to monitor and manage the various steps of data collection and analysis for both integrity and replication purposes
  • To become a more productive (taking less time to analyse your data) and more careful (making fewer mistakes) scientist

Aims and working method

There are four sessions of 3-4 hours each, held two per day over two course days. Sessions will include a mix of lectures, demonstrations, and in-class exercises. You will need to bring a laptop on which you have the necessary rights to install software.

Students will work with data sets supplied for the course, but may also bring a data set of their own. Data can be from any source: experiments, surveys, time series, panels, etc.


Required programming skills

Students following this course are expected to satisfy the following requirements:

  • Prior exposure to the R programming language. This is a very low threshold of knowledge, and one that can be attained, for example, by following an online tutorial or course.
    Visit www.jasonmtroos.com/learning-r for a list of resources.
  • Knowledge of basic probability theory and statistical analysis, for example, regarding linear models or analysis of variance. If you are in doubt about your background, contact the Graduate School office (Jan Nagtzaam: nagtzaam@esg3h.eur.nl).

Session descriptions

Sessions are both iterative and cumulative, so attendance at all four sessions is mandatory. In the first session, you will follow a tutorial that touches on many of the tools you will eventually encounter in the course; you are not expected at this stage to understand every aspect of this exercise.

In each session, we will build upon the previous, adding new tools while reinforcing what you have already learned. The goal is that by the fourth session, you will have learned enough to apply these tools to your own research.

Exercises
Between sessions, you will complete exercises in order to practice and develop your new skills. Although these exercises will not be graded, their completion is mandatory, as students will review and attempt to replicate each other’s work throughout the course.

  • Session 1:
    Course overview and first steps with R

    • You will create, edit, and compile an R-markdown file that contains a free-text discussion of your data analysis, your code, and any output from that code (including plots).
    • We will build an R-markdown file that collects data from an online source, performs a few basic manipulations, and plots the results. You will learn how to use version control software to track changes to this markdown file over time (see the first sketch after this list).

  • Session 2:
    Acquiring, preparing, and visualising data

    • You will learn how to write code to acquire data from files located on the web or stored on your local computer, load them into R, and “clean” the data in preparation for further analysis (such as data visualisation). You will then learn about a powerful yet relatively simple “grammar” for visualising data that has been implemented in the ggplot2 package in R.
    • We will also discuss the underlying theory that drives this grammar (including the psychological principles behind effective data visualisation), and gain an appreciation for how visualisation can lead to insights about data more quickly than statistical analysis. (A sketch of this workflow appears after this list.)

  • Session 3:
    Obtaining data from web sites and social media

    • You will learn how to acquire data from various online sources, such as web pages and the Twitter API, and how to automate these procedures. You will continue to gain practice preparing, analysing, and visualising these data (see the third sketch after this list).

  • Session 4:
    Text and sentiment analysis

    • You will learn how to process large amounts of unstructured data (e.g., text documents) to extract important features (e.g., the occurrence of special words). You will also learn how to conduct automatic sentiment analysis (scoring text based on its positivity or negativity; see the fourth sketch after this list).
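
To give a flavour of the tools covered, the four sketches below correspond to the four sessions. They are illustrative only: file names, URLs, column names, and any packages not named above are plausible assumptions rather than the official course materials.

Sketch 1 (Session 1). A minimal R-markdown file mixing free text, code, and output might look like this (the data set "cars" ships with base R):

    ---
    title: "First steps with R"
    output: html_document
    ---

    A free-text discussion of the analysis goes here.

    ```{r speed-plot}
    # cars ships with base R: car speed (mph) vs. stopping distance (ft)
    plot(cars$speed, cars$dist,
         xlab = "Speed (mph)", ylab = "Stopping distance (ft)")
    ```

Saving this file and committing it to version control after each change is how its history is tracked over time.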
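
Sketch 2 (Session 2). The read-clean-plot workflow, assuming a hypothetical CSV file with "month" and "revenue" columns; readr and dplyr are common companions to the ggplot2 package named above:

    library(readr)    # reading delimited files
    library(dplyr)    # data preparation
    library(ggplot2)  # grammar-of-graphics visualisation

    raw <- read_csv("https://example.com/sales.csv")  # hypothetical source

    clean <- raw %>%
      filter(!is.na(revenue)) %>%      # drop incomplete rows
      mutate(month = as.Date(month))   # parse dates (assumed ISO format)

    ggplot(clean, aes(x = month, y = revenue)) +
      geom_line() +
      labs(x = "Month", y = "Revenue (EUR)")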
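
Sketch 3 (Session 3). Scraping headlines from a web page with the rvest package (one common choice; the URL and CSS selector are hypothetical):

    library(rvest)

    page <- read_html("https://example.com/articles")  # fetch and parse the page
    nodes <- html_nodes(page, "h2.title")              # hypothetical CSS selector
    headlines <- html_text(nodes)                      # extract the visible text

    head(headlines)

Wrapping code like this in a function and running it on a schedule is the usual route to automating the collection.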
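
Sketch 4 (Session 4). Scoring short texts for sentiment with the tidytext package (again an assumption rather than the official course tool):

    library(dplyr)
    library(tidytext)

    docs <- tibble(id = 1:2,
                   text = c("A great, insightful lecture.",
                            "The exercise was frustrating and unclear."))

    docs %>%
      unnest_tokens(word, text) %>%                         # one row per word
      inner_join(get_sentiments("bing"), by = "word") %>%   # match a sentiment lexicon
      count(id, sentiment)                                  # positive/negative counts per text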


About the instructor

Jason Roos is an Assistant Professor at the Department of Marketing Management of the Rotterdam School of Management (RSM), Erasmus University Rotterdam (EUR). His research focuses on issues related to new media and the Internet, as well as the entertainment industry.

Jason received his PhD from Duke University's Fuqua School of Business. Before entering academia, he was a consultant and software engineer in the Seattle area during the original dot-com bubble, working on projects for Microsoft, BP, AT&T Wireless, and the U.S. Government.