Overview can now read most file formats directly

Previously, Overview could only read PDF files. (You can also import all documents in a single CSV file, or import a project from DocumentCloud.)

Starting today, you can directly upload documents in a wide variety of file formats. Simply add the files — or entire folders — using the usual file upload page.

Note that the “Add all files in a folder” button is only available when you are using the Google Chrome browser, due to limitations in browser support for this feature.

Overview will automatically detect the file type and extract the text. Your document will be displayed as a PDF in your browser when you view it. Overview supports a wide variety of formats, including:

  • PDF
  • HTML
  • Microsoft Word (.doc and .docx)
  • Microsoft PowerPoint (.ppt and .pptx)
  • plain text and rich text (.rtf)

For a full list, see the file formats that LibreOffice can read.

Stupid Tag Tricks

Overview’s tags are very powerful, but it may not be obvious how to use them best. Here’s a collection of tagging tricks that have been helpful to our users, from Overview developer Jonas Karlsson.

Tracking documents to review

After you have reviewed a set of documents, tag them with “reviewed” in addition to any other tags you might want, such as “interesting” or “follow up.” Then you can instantly see what you have not reviewed by using the Show Untagged button.

If you are working together with other people to review documents, you can add a tag called “In review by XY” when you start reviewing a folder. When review is complete, add the “Reviewed by XY” tag, and remove the “In review” tag. If the documents being reviewed by different team members overlap, these tags will make it easier to avoid duplicate work.

Grouping Tags

Tags are sorted alphabetically. To group related tags, start their names with the same letter or punctuation character: “+ a”, “+ x”, “* b”, “* y”, “* z”.

Color-code tags with the same or similar colors to indicate similar concepts. Use long tag names to make selection less error-prone (no accidentally hitting the + or – buttons).

To change tag colors or names, open the Organize Tags dialog box by clicking on the link at the bottom of the tags pane, then click on the tag name or color you want to change.

Create a visualization from your tags

See the Export this list as a spreadsheet link at the top of the Organize Tags dialog box? That will produce a CSV listing how many documents have each tag, like this:

You can load this data into another program to visualize it. This is how Mick Conroy of TemperoUK created this analysis of the social media conversation around drones, by importing the tag data into this visualization software.
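
If you want to script the chart instead of using a spreadsheet program, a minimal Python sketch might look like this (the file name and the “tag”/“count” column names are assumptions — check the header row of your exported CSV, and install matplotlib first):

# Minimal sketch: plot tag counts exported from Overview's Organize Tags dialog.
# Assumes the CSV has columns named "tag" and "count" -- check your export's
# header row and adjust the names below if they differ.
import csv

import matplotlib.pyplot as plt

tags, counts = [], []
with open("tag_counts.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        tags.append(row["tag"])
        counts.append(int(row["count"]))

plt.barh(tags, counts)            # one horizontal bar per tag
plt.xlabel("Number of documents")
plt.tight_layout()
plt.savefig("tag_counts.png")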

Tag all documents that do not contain tag “abc”:

  1. Tag all documents with a new tag “Not abc” (by selecting the top of the tree)
  2. Select the “abc” tag
  3. Remove the “Not abc” tag from the selected documents (click on the ‘-’ on the “Not abc” tag)

Tag all documents that have tags “a” OR “b” OR “c”

  1. Select tag “a”
  2. Create a new tag “a or b or c”. All “a” documents should now have this tag.
  3. Select tag “b”
  4. Add the “a or b or c” tag (click on the ‘+’ on the “a or b or c” tag)
  5. Select tag “c”
  6. Add the “a or b or c” tag.

Tag all documents that have tags “a” AND “b” AND “c”

  1. Using the first procedure above, create tags for “not a”, “not b”, and “not c”.
  2. Using the second procedure above, create a “not a OR not b OR not c” tag.
  3. Using the first procedure above, create “Not (not a OR not b OR not c)”, calling it “a and b and c”.

This works because of De Morgan’s law: a document has all of “a”, “b”, and “c” exactly when it has none of the tags “not a”, “not b”, or “not c”.

Importing documents from a CSV file

There are different ways to get your documents into Overview. This post is about loading documents into Overview from a CSV file, and the format that Overview expects.

The quick answer

Overview expects all documents in a single CSV file, one document per row, plus a header row. Overview can read the following columns:

  • text — this is the only required column, and must contain the document text.
  • title — This is displayed when viewing the document. Documents are sorted by title in the document list.
  • url — If the URL begins with https, Overview will display the page when viewing the document. Otherwise Overview will display a source link.
  • id — ignored by Overview, but saved when the document set is exported, so you can match against other tables.
  • tags — a comma-separated list of tags applied to each document. Great for comparing text to data.

The “text” column must exist. (It may also be named “snippet” or “contents” for compatibility with files exported by Radian 6 and Sysomos). All other columns are optional.

Need more details? Read on.

Creating a CSV file

Many programs can save a CSV file. For example, Excel can save a spreadsheet as a CSV file; be sure to include the column names in the first row. You can also export a CSV file from a MySQL database.

You can also use Overview’s docs2csv script to turn almost any random collection of files into a CSV Overview can read. It will also automatically OCR documents if needed.

Overview can probably read just about any CSV file created with any program, as long as the header is right. You may need to rename existing columns to the names that Overview expects, or add a header row entirely, because some programs do not write one. To do this, edit the CSV file with any text editor such as Notepad or TextEdit — but not Microsoft Word, as you will probably break the CSV file if you try to edit it in a word processor.
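
If hand-editing a large file is impractical, a few lines of Python can do the renaming for you. A sketch, assuming your export has a column named “contents” that should become “text” (adjust the names and file names to match yours):

# Sketch: rename a CSV column header so Overview recognizes it.
# Assumes the input has a column named "contents" that should be called "text".
import csv

with open("export.csv", newline="", encoding="utf-8") as src, \
     open("for_overview.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    header = next(reader)
    header = ["text" if name == "contents" else name for name in header]
    writer.writerow(header)
    writer.writerows(reader)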

What a CSV file for Overview should look like

A CSV file is simply a list of “comma-separated values,” organized into rows and columns, like a spreadsheet or a table. The file starts with a list of the column names, separated by commas. This is followed by each row of data, one row per line, with the values for each column again separated by commas. Overview only requires one column, which must be named “text.” Here is an example file:

text
This is the content of the first document.
And here is the text of document the second
Document three talks about quick brown foxes.
.
.

If the text of a document spans multiple lines, or itself contains commas, then it needs to be quoted. Quotes inside a quoted document must be “escaped” by turning them into double quotes. This is all standard CSV stuff, and any program or library that writes CSVs should do it automatically.

text
"This document is really long and crosses multiple lines and contains
commas, which is why it is quoted."
"This is the second document. I'd like to say ""Hi!"" to everyone
to show how to put quotes inside a quoted document. The text of
this document can cross as many lines as needed, or even contain
blank lines like this:

The second document ends with this final quote."
The third document fits all on one line so no quotes needed.
"The fourth document has a comma in it, so it's quoted too."
.
.
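
For example, here is a short Python sketch that writes a file much like the one above; the standard csv module handles the quoting and the doubled quotes automatically (the file name and texts are made up for illustration):

# Sketch: the csv module quotes multi-line text and doubles embedded
# quotes automatically, producing the format shown above.
import csv

docs = [
    "This document is really long and crosses multiple lines and contains\ncommas, which is why it is quoted.",
    'This is the second document. I\'d like to say "Hi!" to everyone.',
    "The third document fits all on one line so no quotes needed.",
]

with open("documents.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["text"])     # header row
    for text in docs:
        writer.writerow([text])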

Overview will display the text in the viewer pane when you click on each document. If you want to display something else for the document, you can add a “url” column which tells Overview to load a particular web page when you view that document. For security reasons, this must be an https URL. Here’s an example using tweets:

text,url
New deploy today -- cleaner clustering, better handling of larger document sets. Anyone got a pile of PDFs they want to look at? Try it!,https://twitter.com/overviewproject/status/281075194557259777
"""I’m not going to sit out on the newsroom floor and sort pages into stacks of documents"" ~@jackgillum on need for document mining software.",https://twitter.com/overviewproject/status/264450385928929280
.
.

There are three more columns that Overview can read. You can add a “title” to each document, which Overview displays in the document list. Documents are sorted by title, so this is a way to control the order in which documents are listed. You can add a unique ID column, simply named “id”, which Overview will read and associate with the document, and export with the document set. And you can add a comma-separated list of “tags” if you want to import documents with tags already applied, or want to compare text to data.

The columns can be in any order; all that matters is that the order of the column names matches the order of the data.

When Overview exports a document set, it writes out id, text, title, url, and tags fields.
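
Putting it together, here is a sketch that writes all five columns Overview understands (the values are invented for illustration; the column order doesn’t matter as long as it matches the header):

# Sketch: write a CSV with every column Overview can read.
import csv

rows = [
    {"id": "1", "text": "First document text...", "title": "Memo 2013-01-15",
     "url": "https://example.com/memo1", "tags": "reviewed,interesting"},
    {"id": "2", "text": "Second document text...", "title": "Memo 2013-02-03",
     "url": "https://example.com/memo2", "tags": ""},
]

with open("documents.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "text", "title", "url", "tags"])
    writer.writeheader()
    writer.writerows(rows)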

Uploading your CSV file to Overview

First, select the upload option from the document set list page:

Then choose a file. Overview will show a preview and do some basic checks to ensure that the format is OK. It should look like this:

After these checks you will see the usual import options, then a preview of the file contents. You may need to tell Overview what character encoding the file uses. Try changing this if you see funny square characters in the preview, or if accents aren’t displaying correctly. Then hit upload, and away we go. You can use Overview as usual on the document set.
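
If you would rather normalize the file before uploading instead of guessing at the encoding in the preview, here is a quick sketch, assuming you know (or can guess) the original encoding — Windows-1252 in this example:

# Sketch: re-encode a CSV to UTF-8 before uploading.
# Assumes the original file is Windows-1252; change src_encoding if not.
src_encoding = "cp1252"

with open("original.csv", encoding=src_encoding) as src, \
     open("utf8.csv", "w", encoding="utf-8") as dst:
    dst.write(src.read())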

The document mining Pulitzers

Four of the winners and finalists of the 2014 Pulitzer prize in journalism were based on reporting from a large volume of documents. Journalists have relied on bulk documents since long before computers — as in the groundbreaking work of I. F. Stone — but document-driven reporting has blown up over the last few years, a confluence of open data, better technology, and the era of big leaks.

What I find fascinating about these stories is that they show such different uses and workflows for document mining in journalism. Several of them suggest directions that Overview should go.

Overview used for Pulitzer finalist story

We were delighted to learn that Overview was used to report the sole finalist in the Public Service category, an investigation into New York state secrecy laws that hide police misconduct. Adam Playford and Sandra Peddie of Newsday reported the series over many months, and Playford wrote about his use of Overview for Investigative Reporters and Editors:

We knew early in our investigation of Long Island police misconduct that police officers had committed dozens of disturbing offenses, ranging from cops who shot unarmed people to those who lied to frame the innocent. We also knew that New York state has some of the weakest oversight in the country.

What we didn’t know was if anyone had ever tried to change that. We suspected that the legislature, which reaps millions in contributions from law enforcement unions, hadn’t passed an attempt to rein in cops in years. But we needed to know for sure, and missing even one bill could change the story drastically.

Luckily, I’d been playing with Overview, a Knight Foundation-funded Associated Press project that highlights patterns within piles of documents. Overview simplified my task greatly — letting me do days’ worth of work in a few hours.

Almost instantly, Overview scanned the full text of all 1,700 bills and created a visualization that split the bills into dozens of groups based on the most unique words that appeared in each bill. This gave me an easy way to skim through the bills in each group by title.

Ultimately, Playford and Peddie were able to prove that lawmakers had never addressed the problem. This is a tremendous story, exactly the sort of classic accountability reporting that journalism is supposed to be about.

The document mining process is particularly interesting because the reporters needed to prove that something was not in the documents, which is not a task that we envisioned when we began building Overview. But because Overview clusters similar documents, they were able to complete an exhaustive review much more quickly, since they could often discard obviously unrelated clusters.

The Snowden files: software is an obvious win

The winner in the Public Service category was a blockbuster: The Guardian and The Washington Post won for their ongoing coverage of the NSA documents. Snowden has never publicly said exactly how many documents he took with him, but The Guardian says it received 58,000 at one point. This story typifies a new breed of document-driven reporting that has emerged in the last few years: a large leak, already in electronic form, with very little guidance about what the stories might be.

Finding what you don’t know to look for is an especially tricky problem with such unasked-for archives — as opposed to the results of a large FOIA request, where the journalist knows why they asked for the documents. Clearly you need software of some sort to do reporting on this type of material.

The Al-Qaida papers: we need a workflow breakthrough

Rukmini Callimachi of the Associated Press was a finalist for her stories based on a cache of Al-Qaida documents found in the remote town of Timbuktu, Mali. Thousands of documents were found strewn through a building that the fighters had occupied for more than a year. Unlike the digital Snowden files these were paper materials, several trash bags full of them, which had to be painstakingly collected, cataloged, scanned, and translated.

Through these documents we’ve learned of Al-Qaida’s shifting strategy in Africa, their tips for avoiding drones, and that members must file expense reports.

We had a chance to talk through the reporting process with Callimachi, and we’ve come to the conclusion that the bottleneck in such “random pile of paper” stories is the preservation, scanning and translation process. The reporters on the similarly incredible Yanuleaks documents — thrown into a lake by ousted Ukrainian president Viktor Yanukovych — face similar challenges. I still believe in the power of good software to accelerate these types of stories, but we need a breakthrough in workflow and process, rather than language analysis algorithms. Could we integrate scanning and translation into a web app? Maybe using a phone scanning stand, and a combination of computer and crowdsourced translation?

America’s underground adoption market: counting cases

The final document-driven Pulitzer finalist covered a black market for adopted children. Megan Twohey of Reuters analyzed 5,000 posts from a newsgroup where parents traded children adopted from abroad.

I’ve started calling this kind of analysis a categorize and count story. Reuters reporters created an infographic to go with their story, summarizing their discoveries: 261 different children were advertised over that time, from 34 different states. Over 70% of these children had been born abroad, in at least 23 different countries. They also documented the number of cases where children were described as having behavioral disorders or being victims of physical or sexual abuse.

When combined with narratives of specific cases, these figures tell a powerful story about the nature and scope of the issue.

5,000 posts is a lot to review manually, but that’s exactly what Twohey and her collaborators did. Two reporters independently read each post and recorded the details in a database, including the name and age of the child, the email address of the post author, and other information. After a lengthy cleaning process they were able to piece together the story of each child, sometimes across many years and different poster pseudonyms.

There may be an opportunity here for information extraction or machine learning algorithms: given a post, extract the details such as the child’s name, and try to determine whether the text mentions other information such as previous abuse.

But no one has really tried applying machine learning to journalism in this way. We hope to add semi-automated document classification features to Overview later this year, because it’s a problem that we see reporters struggling with again and again.

We’ll see more of this

I’m going to end with a guess: we’ll see several document-driven stories in next year’s Pulitzers, because we’ll see many more such stories in journalism in general. I make this prediction by looking at several long-term trends. Broadly speaking, the amount of data in the world is rapidly increasing. Open data initiatives offer a particular trove for journalists because they are often politically relevant, and governments across the world are getting better at responding to Freedom of Information requests (even in China). At the same time, we’ve entered the era of the mega-leak: an entire country’s state secrets can now fit on a USB drive.

Much of this flood is unstructured data: words instead of numbers. Some technologists argue that we can and should eventually turn all of this into carefully coded databases of events and assertions. While such structuring efforts can be very valuable, we can expect that unstructured text data will always be with us because of its unique flexibility. Emails, instant messages, open-ended survey questions, books: ordinary human language is the only format that seems capable of expressing the complete range of human experience, and that is what journalism is ultimately about.

Meanwhile the technology for reporting on bulk unstructured material is improving rapidly. Overview is a part of that, and we’re aiming specifically at users in journalism and other traditionally under-resourced social fields. Between greater data availability and better tools, I have to imagine that we’ll see a lot more document-driven journalism in the future. To my eye, this year’s Pulitzers are a reflection of larger trends.

[Updated 2014-5-6 with a more accurate description of the reporting process for the Reuters story.]


Overview’s response to the Heartbleed security vulnerability

UPDATE: we have installed our new SSL certificates. If you are an Overview user, you should have received an email asking you to reset your password, by clicking on the “reset it” link on the login form. Please reset your password! If you are concerned that someone may have gained unauthorized access to your documents, we can work with you to audit our server logs to see if anyone who wasn’t you used your password.

This completes Overview’s recovery from Heartbleed.

You may have heard that, a few days ago, a serious bug called Heartbleed was discovered in a piece of software that powers much of the web, including Overview.

This bug could allow an attacker to intercept and decode secured connections to our server, and thereby gain access to your password and then your private documents. Due to the nature of this bug there is no way for us to know if any accounts have been compromised.

We have already upgraded our servers so they do not have this vulnerability. Unfortunately, if anyone compromised our secure connections previously they may still be able to do so. We are working with our provider to get new SSL certificates to fix this problem. We are told this will take a few days.

When this is done, we will send out a mass email asking everyone to reset their password.

We apologize for the inconvenience. It’s a breathtaking bug, and we and the rest of the web are recovering as fast as we can.


VIDEO: What the Overview Project does

Here is my talk from the wonderful Groundbreaking Journalism conference in Berlin last week, plus the panel afterwards. This is a great short introduction to what the Overview Project has done, and where we are going — we see ourselves as a pipeline from the AI research community to usable applications in the social sector.

My talk is 15 minutes, followed by a panel on “What software does journalism need?”

View the same documents in different ways with multiple trees

Starting today Overview supports multiple trees for each document set. That is, you can tell Overview to re-import your documents — or a subset of them — with different options, without uploading them again. You can use this to:

  • Focus on a subset of your documents, such as those with a particular tag or containing a specific word.
  • Use ignored and important words to try sorting your documents in different ways.

You create a new tree using the “New Tree” link above the tree:

This brings up a dialog box that looks very similar to the usual import options. You can name the tree (good for reminding yourself why you made it) and set ignored and important words to tell Overview how you want your documents organized in this tree. You can also include only those documents with a specific tag.

To create a tree that contains only the documents matching a particular search term, first turn your search into a tag using the “create tag from search results” button next to the search box.

Tags are shared between all of the trees created from a document set. That means when you tag a document in one tree, it will be tagged in every other tree. You can try viewing your documents with different trees, tagging in whatever tree is easiest to use.

After you create a tree, you can get information about what you created by clicking the little (i) on the tab for that tree:


Who will bring AI to those who cannot pay?

One Sunday night in 2009, a man was stabbed to death in the Brentwood area of Long Island. Due to a recent policy change there was no detective on duty that night, and his body lay uncovered on the sidewalk until morning. Newsday journalist Adam Playford wanted to know if the Suffolk County legislature had ever addressed this event. He read through 7,000 pages of meeting transcripts and eventually found the council talking about it:

the incident in, I believe, the Brentwood area…

This line could not have been found through text search. It does not contain the word “police” or “body,” or the victim’s name or the date, and “Brentwood” matches too many other documents. Playford read the transcripts manually — it took weeks — because there was no other way available to him.

But there is another way, potentially much faster and cheaper. It’s possible for a computer to know that “the incident in Brentwood” refers to the killing, if it’s programmed with enough contextual information and sophisticated natural language reasoning algorithms. The necessary artificial intelligence (AI) technology now exists. IBM’s Watson system used these sorts of techniques to win at Jeopardy, playing against world champions in 2011.

Last month, IBM announced the creation of a new division dedicated to commercializing the technology they developed for Watson. They plan to sell to “healthcare, financial services, retail, travel and telecommunications.”

Journalism is not on this list. That’s understandable, because there is (comparatively speaking) no money in journalism. Yet there are journalists all over the world now confronted with enormous volumes of complex documents, from leaks and open government programs and freedom of information requests. And journalism is not alone. The Human Rights Data Analysis group is painstakingly coding millions of handwritten documents from the archives of the former Guatemalan national police. UN Global Pulse applies big data for humanitarian purposes, such as understanding the effects of sudden food price increases. The crisis mapping community is developing automated social media triage and verification systems, while international development workers are trying to understand patterns of funding by automatically classifying aid projects.

Who will serve these communities? There’s very little money in these applications; none of these projects can pay anywhere near what a hedge fund or a law firm or intelligence agency can. And it’s not just about money: these humanitarian fields have their own complex requirements, and a tool built for finding terrorists may not work well for finding stories. Our own work with journalists shows that there are significant domain-specific problems when applying natural language processing to reporting.

The good news is that many people are working on sophisticated software tools for journalism, development, and humanitarian needs. The bad news is that the problem of access can’t be solved by any piece of software. Technology is advancing constantly, as is the scale and complexity of the data problems that society faces. We need to figure out how to continue to transfer advanced techniques — like the natural language processing employed by Watson, which is well documented in public research papers — to the non-profit world.

We need organizations dedicated to continuous transfer of AI technology to these underserved sectors. I’m not saying that for-profit companies cannot do this; there may yet be a market solution, and in any case “non-profit” organizations can charge for services (as the Overview Project does for our consulting work.) But it is clear that the standard commercial model of technology development — such as IBM’s billion dollar investment in Watson — will largely ignore the unprofitable social uses of such technology.

We need a plan for sustainable technology transfer to journalism, development, academia, human rights, and other socially important fields, even when they don’t seem like good business opportunities.

Use “important” and “ignored” words to tell Overview how to file your documents

Overview automatically files documents by topic. But you know things that the computer doesn’t, like what’s important for your particular documents. Now you can tell Overview that certain words are important when you import your documents.

This works in combination with the ability to ignore unimportant words. Suppose you’re looking at the White House emails about drilling in the Gulf of Mexico (one of Overview’s example document sets) and you’re specifically interested in environmental topics. You can enter words like “environment” and “environmental” in the important words box, like this:

Here we’ve used the “words to ignore” feature to tell Overview to ignore the names of the two main email writers, because we don’t want to organize emails by who sent them — just their contents. Then we’ve entered “environment” and “environmental” as important words to tell Overview that that’s what we want to look for. Note that we’ve also entered “Environment” and “Environmental” because the important words list is case-sensitive (ignored words are not case-sensitive).

Overview throws out the ignored words, then gives extra weight to any of the important words it finds. Usually it ends up filing all the documents containing important words in their own folder, like this:

Overview won’t necessarily put every document containing an important word into one of these folders. If a document contains “environment” but is much more closely related to other documents which do not, it will be filed with them instead. (You can always search to see where Overview has filed documents containing a particular word.)

Also, each important word might or might not get its own folder. Overview doesn’t know “environment” and “environmental” have similar meanings, but it does see that the documents containing these words are similar, so it puts them together.

You can also use Java regular expressions to find important words. For example, you can create a folder for each all-uppercase ACRONYM by using the expression [A-Z]+. Even if you’re not using regular expressions, important words are case-sensitive. (This is to make it easier to find names, which are often capitalized. In the future we’ll add a check box to turn this on or off.)
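
To preview what a pattern will catch before importing, you can test it against a snippet of your text. A quick sketch using Python’s re module (Overview itself takes Java regular expression syntax, but a simple pattern like this behaves the same; the sample sentence is made up, and this only previews raw matches, not exactly how Overview applies the pattern):

# Sketch: preview which strings an important-words pattern like [A-Z]+ matches.
import re

sample = "The FBI and the EPA sent a memo to NASA about the drilling Permits."
print(re.findall(r"[A-Z]+", sample))
# ['T', 'FBI', 'EPA', 'NASA', 'P'] -- note it also picks up the first letter
# of ordinary capitalized words; a stricter pattern like \b[A-Z]{2,}\b
# would match only the acronyms.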

Taken together, ignored and important words are a powerful way to tell Overview how you want certain documents organized, while letting the computer make automatic decisions for the rest.

How big is a document dump?

When journalists end up with a huge stack of documents they need to sort through, how big is that stack? One of the fun things about working on Overview is that we get firsthand experience with many of these stories, and get to talk a lot of nerdy shop about the others.

So here’s our casual list of document sets that journalists have had to contend with. I’ve thrown in the links where possible, and a description of how the documents were delivered.

  • U.K. MP expenses – 700,000 documents in 5,500 PDF files, from government website
  • Wikileaks Iraq war data – 391,832 structured records, each including a text description.
  • Wikileaks diplomatic cables – 251,287 cables, each a few pages long
  • Military discharge records – 112,000 assorted files in just about every document file format
  • NSA files leaked by Snowden – 50,000 to 200,000 according to the NSA
  • Wikileaks Afghanistan war data – 91,731 structured records, same format as Iraq data
  • Free the Files – 43,200 political TV ad spending files, PDF scans of paper, from FCC website
  • Paul Ryan correspondence – 9000 pages, on paper, via FOIA requests to more than 200 agencies
  • Tulsa PD emails – 8000 emails, in Outlook format, via FOIA request
  • Pentagon Papers – 7000 pages leaked 1973, on paper, now declassified and available
  • Illegal adoption market in U.S. – 5029 messages web-scraped from a Yahoo! forum
  • Iraq security contractor reports – 4,500 pages on paper, via FOIA request
  • North Carolina foodstamp woes – 4,500 pages of emails, on paper, via FOIA request
  • New York State proposed legislation – 1,680 bills, downloaded via government API
  • White House Gulf of Mexico drilling emails – 628 documents, mostly emails, on paper, via FOIA request
  • Dollars for Docs – 65 gigantic disclosure reports, mostly huge PDFs of tables

So how big is the typical document dump? Well, it’s, ah… how do you measure that? How does a “record” compare to a “page” or an “email”? The first thing I see when I look at this list is the huge variety of file formats, largely because we’ve had to spend so much time helping people deal with them (more on that).

And it’s not just digital file formats. There’s actual paper involved in this business. Paper is a very popular choice for answers to FOIA requests. This is partially a technology problem and partially just that paper is still really good at certain things, like making absolutely 100% sure your redactions cannot be undone and the document metadata has been stripped. And even when you do get an electronic format, you might end up with a single massive PDF with thousands of variable-length emails (in which case do this to load it into Overview.)

But we do have some numbers here, and maybe a page and an email might be about the same-ish amount of work to deal with, so let’s imagine it’s all comparable and call it all pages. Some sets are very large, up to 700,000, but most are in the 5,000–10,000 page range. I’ll take a median instead of an average since the distribution is highly skewed, and… 9000.
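
For the curious, the arithmetic is easy to reproduce. Here is a sketch using the counts listed above, pretending every record, email, and file is one page and splitting the difference on the NSA estimate; the exact median depends on how you count, landing in the 8,000–9,000 range, but either way it is a far more honest summary than the mean for a distribution this skewed:

# Sketch: median vs. mean of the document-dump sizes listed above,
# treating every "record", "email", and "file" as one page.
from statistics import mean, median

counts = [700_000, 391_832, 251_287, 112_000, 125_000,  # NSA: midpoint of 50k-200k
          91_731, 43_200, 9_000, 8_000, 7_000,
          5_029, 4_500, 4_500, 1_680, 628, 65]

print(f"mean:   {mean(counts):,.0f}")    # ~110,000 -- dragged up by a few huge dumps
print(f"median: {median(counts):,.0f}")  # ~8,500 -- much closer to the typical case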

The most typical size of document dump that journalists have to deal with is 9000 pages. At least, most typical of our collection. Half our cases are larger than that and half are smaller. The largest document sets that journalists work with are in the million range now, and we should expect that to increase. (See also: how to configure Overview to handle more documents.)

9000 pages would take 150 hours to read through completely at a rate of one per minute, or about 20 eight-hour workdays. The largest document set on this list (the MP expenses) would take almost exactly four years to read if you worked every single day. This is why we need computers.