Assignment 5: Interpreting Data

For this assignment you will write a short news story about the status of women in academic science. You can use any sources you like. The recent paper Women in Academic Science: A Changing Landscape by Ceci et al. contains a lot of data you might find relevant; however, I expect you to use multiple sources.

I will look for the following things in marking your story:

1) What question or questions are you using data to answer? “The status of women” could mean many different things. You must be clear about what you are writing about, and how this relates to the broader concept described by the words “the status of women in academic science.”

2) What data did you choose to get your answer? Please clearly present this data and explain what it means. Include tables or graphs if appropriate.

3) How was this data collected? How do you know it is accurate? How do you know it means what you think it means? I want to see that you have at least considered the questions on the “interview the data” slide.

4) What other hypotheses, if any, fit the data you have chosen? Why is your explanation more correct than the other possible explanations? Could multiple explanations be true at the same time?

5) Is there other data that supports your conclusion? What about non-data sources of information? We have discussed triangulation many times in class, and I want to see evidence of it here. A strong argument combines different types of evidence from different sources.

6) Have you accounted for the possibility that what you see in the data has happened by chance? What are the sources of random variation here, and why is the pattern you see not likely to occur by chance?

7) Your story must be a maximum of 500 words and written for a general audience. That means you cannot assume that the reader is familiar with data concepts. If you need to use an idea that would not be familiar to someone who has never studied statistics or worked with data, you must explain what that idea means.

Due Dec 18 before class.

Assignment 4: Social Network Analysis

For this assignment you will analyze a social network using three different centrality algorithms, and compare the results.

1. Download and install Gephi, a free graph analysis package. It is open source and runs on any OS.

2. Download the data file lesmis.gml from the UCI Network Data Repository.  This is a network extracted from the famous French novel Les Miserables — you may also be familiar with the musical and the recent movie. Each node is a character, and there is an edge between two characters if they appear in the same chapter. Les Miserables is written in over 300 short chapters, so two characters that appear in the same chapter are very likely to meet or talk in the plot of the book. The edges are weighted, and the weight is the number of chapters those characters appear together in.

3. Open this file in Gephi by choosing File->Open. When the dialog box comes up, set the “Graph Type” to “Undirected.” The graph will be plotted. What do you see? Can you discern any patterns?

4. Now arrange the nodes in a nicer way, by choosing the “Force Atlas 2” layout algorithm from the Layout menu at left and pressing the “Run” button. When things settle down, hit the “Stop” button. The graph will be arranged nicely, but it will be quite small. You can zoom in using the mouse wheel (or two fingers on the trackpad on a Mac) and pan using the right mouse button.

5. Select the “Edit” tool from the bottom of the toolbar on the left. It looks like a mouse pointer with a question mark next to it.

6. Now you can click on any node to see its label, which is the name of the character it represents. This information will appear in the “Edit” menu in the upper left. Here’s the information for the character Gavroche.

Click around the various nodes in the graph. Which characters have been given the most central locations? If you are familiar with the story of Les Miserables, how does this correspond to the plot? Are the most central nodes the most important characters?

7. Make Gephi color nodes by degree. Choose the “Ranking” tab from the panel at the upper left, then select the “Nodes” tab, then “Degree” from the drop-down menu. Press the “Apply” button.

Now the nodes with the highest degree will be darker. Do these high degree nodes correspond to the nodes that the layout algorithm put in the center? Are they the main characters in the story?

8. Now make Gephi compute betweenness and closeness centrality by pressing the “Run” button for the “Network Diameter” option under “Network Overview” on the right side of the screen.

You will get a report with some graphs. Just click “Close”. Now betweenness and closeness centrality will appear in the drop-down under “Ranking,” in the same place where you selected degree centrality earlier, and you can assign colors based on either metric by clicking the “Apply” button.

Also, the numerical values for betweenness centrality and closeness centrality will now appear in the “Edit” window for each node.

Select “Betweenness Centrality” from the drop-down menu and hit “Apply.” What do you see? Which characters are marked as important? How do they differ from the characters marked as important by degree?

Now select “Closeness Centrality” and hit “Apply.” (Note that this metric uses a scale which is the reverse of the others — closeness measures average distance to all other nodes, so small values indicate more central nodes. You may want to swap the black and white endpoints of the color scale to get something which is comparable to the other visualizations.) How does closeness centrality differ from betweenness centrality and degree? Which characters differ between closeness and the other metrics?

9. Which centrality algorithm would you prefer to use to understand the structure of Les Miserables? Why? How would you validate your choice if you didn’t already know the story? That is the situation a journalist is in when they analyze unknown data.

10. Turn in: your answers to the questions in steps 3, 6, 7, 8 and 9, plus screenshots for the graph plotted with degree, betweenness centrality, and closeness centrality. (To take a screenshot: on Windows, use the Snipping Tool. On Mac, press ⌘ Cmd + ⇧ Shift + 4. If you’re on Linux, you get to tell me)

What I am interested in here is how the values computed by the different algorithms correspond to the plot of Les Miserables (if you are familiar with it), and how they compare to each other. Telling me that “Jean Valjean has a closeness centrality of X” is not a high enough level of interpretation — you couldn’t publish that in a finished story, because your readers won’t know what that means.

 

Assignment 3: Entity extraction

For this assignment you will evaluate the performance of OpenCalais, a commercial entity extraction service. You’ll do this by building a text enrichment program, which takes plain text and outputs HTML with links to the detected entities. Then you will take five random news articles, enrich them, and manually count how many entities OpenCalais missed or got wrong.

1. Get an OpenCalais API key from this page.

2. Install the python-calais module. This will allow you to call OpenCalais from Python easily. First, download the latest version of python-calais. To install it, you just need calais.py in your working directory. You will probably also need to install the simplejson Python module. Download it, then run “python setup.py install.” You may need to execute this as super-user.

Edit: python-calais no longer works.

Here’s the info on the new Open Calais API in Python. All I needed to know was here: http://www.opencalais.com/wp-content/uploads/2015/06/Thomson-Reuters-Open-Calais-Upgrade-Guide-v4.pdf

Here’s a little Python excerpt using the Requests library to make the call.

import requests

API_KEY = '...'  # your OpenCalais API key from step 1

def analyze(content_string):
    """Submit raw text to OpenCalais and return the parsed JSON response."""
    header_content = {'Content-Type': 'text/raw',
                      'x-ag-access-token': API_KEY,
                      'outputFormat': 'application/json'}
    r = requests.post('https://api.thomsonreuters.com/permid/calais',
                      headers=header_content,
                      data=content_string)
    return r.json()

3. Call OpenCalais from Python. Make sure you can successfully submit text and get the results back, following these steps. The output you want to look at is in the entities array, which would be accessed as “result.entities” using the variable names in the sample code. In particular you want the list of occurrences for each entity, in the “instances” field.
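For example, you might call the analyze() function above on an article saved as plain text (the file name here is hypothetical), and then inspect the result as below:

result = analyze(open('article.txt').read())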

>>> result.entities[0]['instances']
[{u'suffix': u' is the new President of the United States', u'prefix': u'of the United States of America until 2009.  ', u'detection': u'[of the United States of America until 2009.  ]Barack Obama[ is the new President of the United States]', u'length': 12, u'offset': 75, u'exact': u'Barack Obama'}]
>>> result.entities[0]['instances'][0]['offset']
75
>>>

Each instance has “offset” and “length” fields that indicate where in the input text the entity was referenced. You can use these to determine where to place links in the output HTML.

Edit: OpenCalais returns a differently structured result now. Each key in the JSON other than “doc” is an item with a “_typeGroup” field; roughly, in Python:

for key, item in result.items():                      # result is the parsed JSON dict
    if key != "doc" and item.get("_typeGroup") == "entities":
        for j, instance in enumerate(item["instances"]):
            offset = instance["offset"]
            length = instance["length"]
            url = item["resolutions"][j]["id"] if j < len(item.get("resolutions", [])) else None

 

4. Read a text file, create hyperlinks, and write it out. Your Python program should read text from stdin and write HTML with links on all detected entities to stdout. There are two cases to handle, depending on how much information OpenCalais gives back.

In many cases, like the example in step 3, OpenCalais will not be able to give you any information other than the string corresponding to the entity, result.entities[x]['name']. In this case you should construct a Wikipedia link by simply appending the name to a Wikipedia URL, converting spaces to underscores, e.g.

http://en.wikipedia.org/wiki/Barack_Obama
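For instance, a small (hypothetical) helper for this fallback case might look like:

# Build a Wikipedia fallback link from an entity name, converting spaces to underscores.
def wikipedia_link(name):
    return 'http://en.wikipedia.org/wiki/' + name.strip().replace(' ', '_')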

In other cases, especially companies and places, OpenCalais will supply a link to an RDF document that contains more information about the entity. For example:

>>> result.entities[0]
{u'_typeReference': u'http://s.opencalais.com/1/type/em/e/Company', u'_type': u'Company', u'name': u'Starbucks', '__reference': u'http://d.opencalais.com/comphash-1/6b2d9108-7924-3b86-bdba-7410d77d7a79', u'instances': [{u'suffix': u' in Paris.', u'prefix': u'of the United States now and likes to drink at ', u'detection': u'[of the United States now and likes to drink at ]Starbucks[ in Paris.]', u'length': 9, u'offset': 156, u'exact': u'Starbucks'}], u'relevance': 0.314, u'nationality': u'N/A', u'resolutions': [{u'name': u'Starbucks Corporation', u'symbol': u'SBUX.OQ', u'score': 1, u'shortname': u'Starbucks', u'ticker': u'SBUX', u'id': u'http://d.opencalais.com/er/company/ralg-tr1r/f8512d2d-f016-3ad0-8084-a405e59139b3'}]}
>>> result.entities[0]['resolutions'][0]['id']
u'http://d.opencalais.com/er/company/ralg-tr1r/f8512d2d-f016-3ad0-8084-a405e59139b3'
>>>

Edit: change the JSON paths above to the new output format described in step 3. Also, remember to use the offset/length fields to locate entities in the input text; don’t search for the entity string, as capitalization etc. may differ (and some instances are pronouns).

In this case the resolutions array will contain a hyperlink for each resolved entity, and this is where your link should go. The linked page will contain a series of triples (assertions) about the entity, which you can obtain in machine-readable form by changing the .html at the end of the link to .json. The sameAs: links are particularly important because they tell you that this entity is equivalent to others in DBpedia and elsewhere.

Here is more on OpenCalais’ entity disambiguation and use of linked data.

The final result should look something like below. Note that some links go to OpenCalais entity pages with RDF links on them (“London”), some go to Wikipedia (“politician”), and some are broken links when Wikipedia doesn’t have the topic (“Aarthi Ramachandran”). And of course “Mr Gandhi” is an entity that was not detected, all three times it appears.

The latest effort to “decode” Mr Gandhi comes in the form of a limited yet rather well written biography by a political journalist, Aarthi Ramachandran. Her task is a thankless one. Mr Gandhi is an applicant for a big job: ultimately, to lead India. But whereas any other job applicant will at least offer minimal information about his qualifications, work experience, reasons for wanting a post, Mr Gandhi is so secretive and defensive that he won’t respond to the most basic queries about his studies abroad, his time working for a management consultancy in London, or what he hopes to do as a politician.

Don’t worry about producing a fully valid HTML document with headers and a <body> tag; just wrap each entity with <a href="…"> and </a>. Your browser will load it fine.
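Putting steps 3 and 4 together, a rough sketch (not a reference solution) might look like this, assuming the analyze() function from step 2, the wikipedia_link() helper above, and the new JSON field names noted in this assignment:

import sys

def enrich(text, result):
    links = []
    for key, item in result.items():
        if key == "doc" or item.get("_typeGroup") != "entities":
            continue
        if item.get("resolutions"):
            url = item["resolutions"][0]["id"]            # resolved entity page
        else:
            url = wikipedia_link(item["name"])            # fall back to Wikipedia
        for inst in item.get("instances", []):
            links.append((inst["offset"], inst["length"], url))
    # Insert links from the end of the text backwards so earlier offsets stay valid.
    for offset, length, url in sorted(links, reverse=True):
        entity = text[offset:offset + length]
        text = text[:offset] + '<a href="' + url + '">' + entity + '</a>' + text[offset + length:]
    return text

if __name__ == '__main__':
    text = sys.stdin.read()
    sys.stdout.write(enrich(text, analyze(text)))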

5. Pick five random news stories and enrich them. Pick an English-language news site with many stories on the home page, or a section of such a site (business, sports, etc.). Then generate five random numbers from 1 to the number of stories on the page. Cut and paste the text of each article into a separate file, and save as plain text (no HTML, no formatting).

Edit: you will go through the articles by hand, counting entities. The point to drive home is that we can only compute precision/recall relative to a human ground truth.

6. Read the enriched documents and count to see how well OpenCalais did. You need to read each output document very carefully and count three things:

  • Entity references. Count each time the name of a person, place, or organization appears, along with other references to these things (e.g. “the president”).
  • Correctly detected entities. Count an entity as correctly detected if the link goes to the right page — OpenCalais RDF pages where possible, Wikipedia when not.
  • Incorrectly detected entities. Count an entity as incorrectly detected if the link goes to the wrong page or to a disambiguation page in Wikipedia. A broken link also counts as incorrect.

7. Turn in your work. Please turn in:

  • Your code
  • The enriched output from your documents
  • A brief report describing your results.
  • The totals as described below

The report should include a table of the three numbers — references, correct, incorrect — for each document, plus the totals of these three numbers across all documents. Also report on any patterns in the failures that you see. Where is OpenCalais most accurate? Where is it least accurate? Are there predictable patterns to the errors?
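If you want to summarize these totals as precision and recall, as discussed in class, one way (a sketch, treating your hand count as the ground truth) is:

def summarize(references, correct, incorrect):
    precision = correct / float(correct + incorrect)   # of the links produced, how many are right
    recall = correct / float(references)               # of the true entity references, how many were found
    return precision, recall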

This assignment is due before class on Friday, November 20.

Assignment 2: Filter Design

For this assignment you will design a hybrid filtering algorithm. You will not implement it, but you will explain your design criteria and provide a filtering algorithm in sufficient technical detail to convince me that it might actually work — including pseudocode.

1. Decide who your users are. Journalists? Professionals? General consumers? Someone else?

2. Decide what you will filter. You can choose:

  • Facebook status updates, like the Facebook news feed
  • Tweets, like Trending Topics or the many Tweet discovery tools
  • The whole web, like Google News
  • something else, but ask me first

3. List all the information you have available as input to your algorithm. If you want to filter Facebook or Twitter, you may pretend that you are the company running the service, and have access to all posts and user data — from every user. You may also assume you have a web crawler or a firehose of every RSS feed or whatever you like, but you must be specific and realistic about what data you are operating with.

4. Argue for the design factors that you would like to influence the filtering, in terms of what is desirable to the user, what is desirable to the publisher (e.g. Facebook or Prismatic), and what is desirable socially. Explain as concretely as possible how each of these (probably conflicting) goals might be achieved in software. Since this is a hybrid filter, you can also design social software that asks the user for certain types of information (e.g. likes, votes, ratings) or encourages users to act in certain ways (e.g. following) that generate data for you.

5. Write pseudo-code for a function that produces a “top stories” list. This function will be called whenever the user loads your page or opens your app, so it must be fast and frequently updated. You can assume that there are background processes operating on your servers if you like. Your pseudo-code does not have to be executable, but it must be specific and unambiguous, such that I could actually go and implement it. You can assume that you have libraries for classic text analysis and machine learning algorithms. So, you don’t have to spell out algorithms like TF-IDF or item-based collaborative filtering, or anything else you can dig up in the research literature, but simply say how you’re going to use such building blocks. If you use an algorithm we haven’t discussed in class, be sure to provide a reference to it.

6. Write up steps 1-5. The result should be no more than three pages. You must be specific and plausible. You must be clear about what you are trying to accomplish, what your algorithm is, and why you believe your algorithm meets your design goals (of course it’s impossible to know for sure without testing; I want something that looks good enough to be worth trying).

Due before class, November 6

 

Assignment 1: Topic Modeling

This assignment is designed to help you develop a feel for the way topic modeling works, the connection to the human meanings of documents, and common ways of handling a time dimension. You will analyze the State of the Union speeches corpus, and report on how the subjects have shifted over time in relation to historical events.

1. Download the source data file state-of-the-union.csv. This is a standard CSV file with one speech per row. There are two columns: the year of the speech, and the text of the speech. You will write a Python program that reads this file and turns it into TF-IDF document vectors, then prints out some information. Here is how to read a CSV in Python. You may need to add the line

csv.field_size_limit(1000000000)

to the top of your program to be able to read this large file.

The file is a csv with columns year, text. Note: there are some years where there was more than one speech! Design your data structures accordingly.
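A minimal sketch of the reading step (assuming the file has no header row; skip the first row if yours does):

import csv
from collections import defaultdict

csv.field_size_limit(1000000000)
speeches = defaultdict(list)   # year -> list of speech texts, since some years have several
with open('state-of-the-union.csv') as f:
    for year, text in csv.reader(f):
        speeches[year].append(text)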

2) Feed the data into gensim. Now you need to load the documents into Python and feed them into the gensim package to generate tf-idf weighted document vectors. Check out the gensim example code here. You will need to go through the file twice: once to generate the dictionary (the code snippet starting with “collect statistics about all tokens”) and then again to convert each document to what gensim calls the bag-of-words representation, which is un-normalized term frequency (the code snippet starting with “class MyCorpus(object)”).

Note that there is implicitly another step here, which is to tokenize the document text into individual word features — not as straightforward in practice as it seems at first, but the example code just does the simplest, stupidest thing, which is to lowercase the string and split on spaces. You may want to use a better stopword list, such as this one.

Once you have your Corpus object, tell gensim to generate tf-idf scores for you like so.
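A rough sketch of this step, assuming the speeches dictionary from step 1 (the tokenization and stopword list here are deliberately minimal placeholders):

from gensim import corpora, models

stopwords = set(['the', 'of', 'and', 'to', 'a', 'in'])   # substitute a real stopword list
documents = [text for year in speeches for text in speeches[year]]
texts = [[w for w in doc.lower().split() if w not in stopwords] for doc in documents]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]          # un-normalized term frequencies
tfidf = models.TfidfModel(corpus)
corpus_tfidf = tfidf[corpus]                             # tf-idf weighted document vectors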

3) Do LSI topic modeling. You can apply LSI to the tf-idf vectors, like so. You will have to supply the number of dimensions to keep. Figuring out a good number is part of the assignment. Print out the resulting topics, each topic as a list of word coefficients. Now, sample ten topics randomly from your set for closer analysis. Try to annotate each of these ten topics with a short descriptive name or phrase that captures what it is “about.” You will likely have to refer to the original documents that contain high proportions of that topic, and you will likely find that some topics have no clear concept.
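A sketch of the LSI step, continuing from the variables above; the number of dimensions (50 here) is only a placeholder, and remember that the assignment asks you to sample your ten topics at random rather than taking the first ten:

lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=50)
for topic in lsi.show_topics(num_topics=10, num_words=10):
    print(topic)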

Turn in: your annotated topics plus a comment on how well you feel each “topic” captured a real human concept.

4) Now do LDA topic modeling. Repeat the exercise of step 3 but with LDA instead, again trying to annotate ten randomly sampled topics. What is different?
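A corresponding LDA sketch; note that gensim’s LdaModel is normally trained on the bag-of-words counts rather than the tf-idf vectors, and the parameter values are again placeholders:

lda = models.LdaModel(corpus, id2word=dictionary, num_topics=50, passes=10)
for topic in lda.show_topics(num_topics=10, num_words=10):
    print(topic)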

Turn in: your annotated topics plus a comment on how LDA differed from LSI.

5) Come up with a method to figure out how the topics of the speeches have changed over time. In the next step you will summarize the changes in the State of the Union speeches in each decade of the 20th and 21st centuries. There are many different ways to use topic modeling to do this. Possibilities include: visualizations, grouping speeches by decade after topic modeling, and grouping speeches by decade before topic modeling. You can base your algorithm on either LSI or LDA, whichever you feel gives the most insight.
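As one illustration of the second possibility listed above (an illustration only, not the method you must use), you could group per-speech topic mixtures by decade after modeling; this sketch assumes a years list aligned with corpus, and the lda model from step 4:

from collections import defaultdict

decade_topics = defaultdict(list)
for year, bow in zip(years, corpus):
    decade_topics[(int(year) // 10) * 10].append(dict(lda[bow]))   # topic id -> weight for this speech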

Turn in: describe your decade summarization algorithm and explain why you believe it will effectively summarize the speeches across a decade.

6) Analyze how the content of the speeches changed in each decade since 1900. Use your decade summarization algorithm to understand what the content of the speeches was in each decade. What patterns do you see? Can you connect the terms to major historical events? (wars, the Great Depression, assassinations, the civil rights movement, Watergate…)

Turn in: write up what you see in narrative form, no more than 500 words, referring to your algorithmic output.

 

This assignment is due Friday, October 9 at 2:00 PM. You may email me the results. I am available for questions by email before then, or in person at office hours on Friday afternoons 1-2pm in the Tow Center.

 

Syllabus Fall 2015

The course is a hands-on introduction to the areas of computer science that have a direct relevance to journalism, and the broader project of producing an informed and engaged public. We will touch on many different technical and social topics: information recommendation systems but also filter bubbles, principles of statistical analysis but also the human processes which generate data, network analysis and its role in investigative journalism, visualization techniques and the cognitive effects involved in viewing a visualization. Assignments will require programming in Python, but the emphasis will be on clearly articulating the connection between the algorithmic and the editorial.

Our scope is wide enough to include both relatively traditional journalistic work, such as computer-assisted investigative reporting, and the broader information systems that we all use every day to inform ourselves, such as search engines and social media. The course will provide students with a thorough understanding of how particular fields of computational research relate to journalism practice, and provoke ideas for their own research and projects.

Research-level computer science material will be discussed in class, but the emphasis will be on understanding the capabilities and limitations of this technology. Students with a CS background will have the opportunity for algorithmic exploration and innovation; however, the primary goal of the course is thoughtful, application-oriented research and design.

Assignments will be completed in groups (except dual degree students, who will work individually) and involve experimentation with fundamental computational techniques. Some assignments will require intermediate level coding in Python, but the emphasis will be on thoughtful and critical analysis. As this is a journalism course, you will be expected to write clearly.

Format of the class, grading and assignments.
This is a fourteen-week course for Master’s students which has both a six point and a three point version. The six point version is designed for CS & journalism dual degree students, while the three point version is designed for those cross-listing from other schools. The class is conducted in a seminar format. Assigned readings and computational techniques will form the basis of class discussion. The course will be graded as follows:

  • Assignments: 80%. There will be a homework assignment after most classes.
  • Class participation: 20%

Dual degree students will also have a final project. This will be either a research paper, a computationally-driven story, or a software project. The class is conducted on a pass/fail basis for journalism students, in line with the journalism school’s grading system. Students from other departments will receive a letter grade.

Week 1: Basics – 9/11
Slides.
First we ask: where do computer science and journalism intersect? CS techniques can help journalism in four different areas: data-driven reporting, story presentation, information filtering, and effect tracking. Then we jump right in with the concept of data. Specifically, we study the quantification process, leading to feature vectors which are a fundamental data representation for many techniques.

Required

Recommended

  • Precision Journalism, Ch.1, Journalism and the Scientific Tradition, Philip Meyer

Viewed in class

Week 2: Clustering – 9/18
Slides.
A vector of numbers is a fundamental data representation which forms the basis of very many algorithms in data mining, language processing, machine learning, and visualization. This week we will explore two things: representing objects as vectors, and clustering them, which might be the most basic thing you can do with this sort of data. This requires a distance metric and a clustering algorithm — both of which involve editorial choices! In journalism we can use clusters to find groups of similar documents, analyze how politicians vote together, or automatically detect groups of crimes.

Required

Recommended

Viewed in class

Week 3: Text Analysis – 9/25
Slides.
Can we use machines to help us understand text? In this class we will cover basic text analysis techniques, from word counting to topic modeling. The algorithms we will discuss this week are used in just about everything: search engines, document set visualization, figuring out when two different articles are about the same story, finding trending topics. The vector space document model is fundamental to algorithmic handling of news content, and we will need it to understand how just about every filtering and personalization system works.

Required

Recommended

Examples

  • Watchwords: Reading China Through its Party Vocabulary, Qian Gang

Assignment: TF-IDF analysis of State of the Union speeches.

Week 4: Information overload and algorithmic filtering –  10/2
Slides.
This week we begin our study of filtering with some basic ideas about its role in journalism. Then we shift gears to pure algorithmic approaches to filtering, with a look at how the Newsblaster system works (similar to Google News).

Required

Recommended

Week 5: Social filtering – 10/9
Slides.
We have now studied purely algorithmic modes of filtering, and this week we will bring in the social. The distinction we will draw is not so much the complexity of the software involved, but whether the user can understand and predict the filter’s choices. We’ll look at Twitter as a prototypical social filter and see how news spreads on this network, and at tools to help journalists find sources. Finally, we’ll introduce the idea of “social software” and use the metaphor of “architecture” to think about how software influences behaviour.

Required

Recommended

Week 6: Hybrid filters, recommendation, and conversation – 10/16
Slides.
We have now studied purely algorithmic and mostly social modes of filtering. This week we’re going to study systems that combine software and people. We’ll look at comment voting, recommendation systems, and how Google search optimizes based on user preference. We’ll dig into the operation of the New York Times’ new recommendation engine which includes both content and collaborative filtering.

Required

Recommended

Assignment – Design a filtering algorithm for status updates.

Week 7: Visualization – 10/23
Slides.
An introduction to how visualization helps people interpret information. Design principles from user experience considerations, graphic design, and the study of the human visual system. The Overview document visualization system used in investigative journalism.

Required

Recommended

No class 10/30

Week 8: Structured journalism and knowledge representation – 11/6
Slides.
Is journalism in the text/video/audio business, or is it in the knowledge business? In this class we’ll look at this question in detail, which gets us deep into the issue of how knowledge is represented in a computer. The traditional relational database model is often inappropriate for journalistic work, so we’re going to concentrate on so-called “linked data” representations. Such representations are widely used and increasingly popular. For example, Google recently released the Knowledge Graph. But generating this kind of data from unstructured text is still very tricky, as we’ll see when we look at the Reverb algorithm.

Required

Recommended

Assignment: Text enrichment experiments using OpenCalais entity extraction. Due 11/20

Week 9: Algorithmic Accountability – 11/13
Slides.
Our society is woven together by algorithms. From high frequency trading to predictive policing, they regulate an increasing portion of our lives. But these algorithms are mostly secret black boxes from our point of view. We’re at their mercy, unless we learn how to interrogate and critique algorithms.

Required

Recommended

Week 10: Network analysis – 11/20
Slides.
Network analysis (aka social network analysis, link analysis) is a promising and popular technique for uncovering relationships between diverse individuals and organizations. It is widely used in intelligence and law enforcement, but not so much in journalism. We’ll look at basic techniques and algorithms and try to understand the promise — and the many practical problems.

Required

Recommended

Examples:

Assignment: Compare different centrality metrics in Gephi. Due 12/4.

No class 11/27

Week 11: Drawing conclusions from data – 12/4
Slides.
You’ve loaded up all the data. You’ve run the algorithms. You’ve completed your analysis. But how do you know that you are right? It’s incredibly easy to fool yourself, but fortunately, there is a long history of fields grappling with the problem of determining truth in the face of uncertainty, from statistics to intelligence analysis.

Required

  • Correlation and causation, Business Insider
  • The Psychology of Intelligence Analysis, chapters 1,2,3 and 8. Richards J. Heuer

Recommended

Assignment: write a story on the status of women in science. Due 12/18.

Week 12: Privacy, Security, and Censorship – 12/11
Slides.
Who is watching our online activities? How do you protect a source in the 21st century? Who gets access to all of this mass intelligence, and what does the ability to surveil everything all the time mean both practically and ethically for journalism? In this lecture we will talk about who is watching and how, and how to create a security plan using threat modeling.

Required

Recommended

Assignment: Use threat modeling to come up with a security plan for a given scenario.

Week 13: Tracking flow and impact – 12/18

How does information flow in the online ecosystem? What happens to a story after it’s published? How do items spread through social networks? We’re just beginning to be able to track ideas as they move through the network, by combining techniques from social network analysis and bioinformatics.

Required

Recommended

Final projects due 12/31 (dual degree Journalism/CS students only)