Visual document mining for journalists

Overview’s response to the Heartbleed security vulnerability

UPDATE: we have installed our new SSL certificates. If you are an Overview user, you should have received an email asking you to reset your password by clicking the "reset it" link on the login form. Please reset your password! If you are concerned that someone may have gained unauthorized access to your documents, we can work with you to audit our server logs to see if anyone who wasn't you used your password.

This completes Overview’s recovery from Heartbleed.

You may have heard that, a few days ago, a serious bug called Heartbleed was discovered in a piece of the software that powers much of the web, including Overview.

This bug could allow an attacker to intercept and decode secured connections to our server, and thereby gain access to your password and then your private documents. Due to the nature of this bug there is no way for us to know if any accounts have been compromised.

We have already upgraded our servers so they do not have this vulnerability. Unfortunately, if anyone compromised our secure connections previously they may still be able to do so. We are working with our provider to get new SSL certificates to fix this problem. We are told this will take a few days.

When this is done, we will send out a mass email asking everyone to reset their password.

We apologize for the inconvenience. It's a breathtaking bug, and we and the rest of the web are recovering as fast as we can.

VIDEO: What the Overview Project does

Here is my talk from the wonderful Groundbreaking Journalism conference in Berlin last week, plus the panel afterwards. This is a great short introduction to what the Overview Project has done, and where we are going — we see ourselves as a pipeline from the AI research community to usable applications in the social sector.

My talk is 15 minutes, followed by a panel on “What software does journalism need?”

View the same documents in different ways with multiple trees

Starting today Overview supports multiple trees for each document set. That is, you can tell Overview to re-import your documents — or a subset of them — with different options, without uploading them again. You can use this to:

  • Focus on a subset of your documents, such as those with a particular tag or containing a specific word.
  • Use ignored and important words to try sorting your documents in different ways.

You create a new tree using this button on your document set list page:

This brings up a dialog box that looks very similar to the usual import options. You can name the tree (good for reminding yourself why you made it) and set ignored and important words to tell Overview how you want your documents organized in this tree. You can also include only those documents with a specific tag.

To create a tree that contains only words matching a particular search term, first turn your search into a tag using the “create tag from search results” button next to the search box.

Tags are shared between all of the trees created from a document set. That means when you tag a document in one tree, it will be tagged in every other tree. You can try viewing your documents with different trees, tagging in whatever tree is easiest to use.

Who will bring AI to those who cannot pay?

One Sunday night in 2009, a man was stabbed to death in the Brentwood area of Long Island. Due to a recent policy change there was no detective on duty that night, and his body lay uncovered on the sidewalk until morning. Newsday journalist Adam Playford wanted to know if the Suffolk County legislature had ever addressed this event. He read through 7,000 pages of meeting transcripts and eventually found the council talking about it:

the incident in, I believe, the Brentwood area…

This line could not have been found through text search. It does not contain the word “police” or “body,” or the victim’s name or the date, and “Brentwood” matches too many other documents. Playford read the transcripts manually — it took weeks — because there was no other way available to him.

But there is another way, potentially much faster and cheaper. It's possible for a computer to know that "the incident in Brentwood" refers to the stabbing, if it's programmed with enough contextual information and sophisticated natural language reasoning algorithms. The necessary artificial intelligence (AI) technology now exists. IBM's Watson system used these sorts of techniques to win at Jeopardy, playing against world champions in 2011.

Last month, IBM announced the creation of a new division dedicated to commercializing the technology they developed for Watson. They plan to sell to “healthcare, financial services, retail, travel and telecommunications.”

Journalism is not on this list. That's understandable, because there is (comparatively speaking) no money in journalism. Yet there are journalists all over the world now confronted with enormous volumes of complex documents, from leaks and open government programs and freedom of information requests. And journalism is not alone. The Human Rights Data Analysis Group is painstakingly coding millions of handwritten documents from the archives of the former Guatemalan national police. UN Global Pulse applies big data to humanitarian problems, such as understanding the effects of sudden food price increases. The crisis mapping community is developing automated social media triage and verification systems, while international development workers are trying to understand patterns of funding by automatically classifying aid projects.

Who will serve these communities? There’s very little money in these applications; none of these projects can pay anywhere near what a hedge fund or a law firm or intelligence agency can. And it’s not just about money: these humanitarian fields have their own complex requirements, and a tool built for finding terrorists may not work well for finding stories. Our own work with journalists shows that there are significant domain-specific problems when applying natural language processing to reporting.

The good news is that many people are working on sophisticated software tools for journalism, development, and humanitarian needs. The bad news is that the problem of access can’t be solved by any piece of software. Technology is advancing constantly, as is the scale and complexity of the data problems that society faces. We need to figure out how to continue to transfer advanced techniques — like the natural language processing employed by Watson, which is well documented in public research papers — to the non-profit world.

We need organizations dedicated to continuous transfer of AI technology to these underserved sectors. I'm not saying that for-profit companies cannot do this; there may yet be a market solution, and in any case "non-profit" organizations can charge for services (as the Overview Project does for our consulting work). But it is clear that the standard commercial model of technology development — such as IBM's billion-dollar investment in Watson — will largely ignore the unprofitable social uses of such technology.

We need a plan for sustainable technology transfer to journalism, development, academia, human rights, and other socially important fields, even when they don’t seem like good business opportunities.

Use “important” and “ignored” words to tell Overview how to file your documents

Overview automatically files documents by topic. But you know things that the computer doesn’t, like what’s important for your particular documents. Now you can tell Overview that certain words are important when you import your documents.

This works in combination with the ability to ignore unimportant words. Suppose you’re looking at the White House emails about drilling in the Gulf of Mexico (one of Overview’s example document sets) and you’re specifically interested in environmental topics. You can enter words like “environment” and “environmental” in the important words box, like this:

Here we've used the "words to ignore" feature to tell Overview to ignore the names of the two main email writers, because we don't want to organize emails by who sent them — just their contents. Then we've entered "environment" and "environmental" as important words to tell Overview that that's what we want to look for. Note that we've also entered "Environment" and "Environmental" because the important words list is case-sensitive (ignored words are not case-sensitive).

Overview throws out the ignored words, then gives extra weight to any of the important words it finds. Usually it ends up filing all the documents containing important words in their own folder, like this:

That said, Overview doesn't put every document containing the important words into that folder. If a document contains "environment" but is much more closely related to other documents which do not, it will be filed with them instead. (You can always search to see where Overview has filed documents containing a particular word.)

Also, each important word might or might not get its own folder. Overview doesn’t know “environment” and “environmental” have similar meanings, but it does see that the documents containing these words are similar, so it puts them together.

You can also use Java regular expressions to find important words. For example, you can create a folder for each all-uppercase ACRONYM by using the expression [A-Z]+, as in the sketch below. Even if you're not using regular expressions, important words are case-sensitive. (This is to make it easier to find names, which are often capitalized. In the future we'll add a check box to turn this on or off.)
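Here is a minimal sketch of what that kind of pattern matching looks like in Java. This is only an illustration of the regular expression syntax, not Overview's own code; in Overview you simply type the expression into the important words box.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustration of how a Java regular expression like [A-Z]+ picks out
// all-uppercase acronyms. The \b word boundaries keep the match to whole
// uppercase tokens when scanning raw text.
public class AcronymExample {
    public static void main(String[] args) {
        String text = "The FOIA request went to the FCC and the NSA last week.";
        Pattern acronym = Pattern.compile("\\b[A-Z]+\\b");
        Matcher m = acronym.matcher(text);
        while (m.find()) {
            System.out.println(m.group()); // prints FOIA, FCC, NSA
        }
    }
}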

Taken together, ignored and important words are a powerful way to tell Overview how you want certain documents organized, while letting the computer make automatic decisions for the rest.

How big is a document dump?

When journalists end up with a huge stack of documents they need to sort through, how big is that stack? One of the fun things about working on Overview is we get firsthand experience with many of these stories, and get to talk a lot of nerdy shop about the others.

So here’s our casual list of document sets that journalists have had to contend with. I’ve thrown in the links where possible, and a description of how the documents were delivered.

  • U.K. MP expenses – 700,000 documents in 5,500 PDF files, from government website
  • Wikileaks Iraq war data – 391,832 structured records, each including a text description
  • Wikileaks diplomatic cables – 251,287 cables, each a few pages long
  • Military discharge records – 112,000 assorted files in just about every document file format
  • NSA files leaked by Snowden – 50,000 to 200,000 documents, according to the NSA
  • Wikileaks Afghanistan war data – 91,731 structured records, same format as Iraq data
  • Free the Files – 43,200 political TV ad spending files, PDF scans of paper, from FCC website
  • Paul Ryan correspondence – 9,000 pages, on paper, via FOIA request of more than 200 agencies
  • Tulsa PD emails – 8,000 emails, in Outlook format, via FOIA request
  • Pentagon Papers – 7,000 pages leaked in 1971, on paper, now declassified and available
  • Illegal adoption market in U.S. – 5,029 messages web-scraped from a Yahoo! forum
  • Iraq security contractor reports – 4,500 pages on paper, via FOIA request
  • North Carolina foodstamp woes – 4,500 pages of emails, on paper, via FOIA request
  • New York State proposed legislation – 1,680 bills, downloaded via government API
  • White House Gulf of Mexico drilling emails – 628 documents, mostly emails, on paper, via FOIA request
  • Dollars for Docs – 65 gigantic disclosure reports, mostly huge PDFs of tables

So how big is the typical document dump? Well, it's, ah… how do you measure that? How does a "record" compare to a "page" or an "email"? The first thing I see when I look at this list is the huge variety of file formats, largely because we've had to spend so much time helping people with a huge variety of file formats (more on that).

And it’s not just digital file formats. There’s actual paper involved in this business. Paper is a very popular choice for answers to FOIA requests. This is partially a technology problem and partially just that paper is still really good at certain things, like making absolutely 100% sure your redactions cannot be undone and the document metadata has been stripped. And even when you do get an electronic format, you might end up with a single massive PDF with thousands of variable-length emails (in which case do this to load it into Overview.)

But we do have some numbers here, and a page and an email are probably about the same amount of work to deal with, so let's imagine it's all comparable and call it all pages. Some sets are very large, up to 700,000, but most are in the 5,000-10,000 page range. I'll take a median instead of an average, since the distribution is highly skewed, and… 9,000.

The most typical size of document dump that journalists have to deal with is 9,000 pages. At least, that's the most typical in our collection: half our cases are larger than that and half are smaller. The largest document sets that journalists work with are in the million range now, and we should expect that to increase. (See also: how to configure Overview to handle more documents.)

9,000 pages would take 150 hours to read through completely at a rate of one page per minute, or about 20 eight-hour workdays. The largest document set on this list (the MP expenses) would take almost exactly four years to read if you worked every single day. This is why we need computers.
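If you want to check those figures, here's a quick back-of-the-envelope calculation, sketched in Java and assuming one page per minute and eight-hour workdays:

// Back-of-the-envelope reading times, assuming one page per minute
// and eight-hour workdays.
public class ReadingTime {
    public static void main(String[] args) {
        long[] pageCounts = {9_000, 700_000};
        for (long pages : pageCounts) {
            double hours = pages / 60.0;     // one page per minute
            double workdays = hours / 8.0;   // eight-hour days
            double years = workdays / 365.0; // reading every single day
            System.out.printf("%,d pages: %.0f hours, %.0f workdays, %.1f years%n",
                    pages, hours, workdays, years);
        }
    }
}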

Keyboard shortcuts in Overview

It’s a little known fact that Overview has several keyboard shortcuts to make navigating through your documents even faster:

  • j, k — view the next and previous document in the list.
  • arrow keys — navigate through the tree. Selects parent, child, and sibling folders.
  • u — go back to the document list, when viewing a single document.

Together these keys are essential for rapid review. You can select a folder, press j to read the first document (which automatically switches from the document list to the single document view) and then press the right arrow to go to the next folder in the tree.

Algorithms are not enough: lessons bringing computer science to journalism

There are some amazing algorithms coming out of the computer science community which promise to revolutionize how journalists deal with large quantities of information. But building a tool that journalists can use to get stories done takes a lot more than algorithms. Closing this gap has been one of the most challenging and rewarding aspects of building Overview, and I really think we've learned something.

Overview is an open-source tool to help journalists sort through vast troves of documents obtained through open government programs, leaks, and freedom of information requests. Such document sets can include hundreds of thousands of pages, but you can't find what you don't know to search for. To solve this problem, Overview applies natural language processing algorithms to automatically sort documents according to topic and produce an explorable visualization of the complete contents of a document set.

I want to get into the process of going from algorithm to application here, because — somewhat to my surprise — I don’t think this process is widely understood.  The computer science research community is going full speed ahead developing exciting new algorithms, but seems a bit disconnected from what it takes to get their work used. This is doubly disappointing, because understanding the needs of users often shows that you need a different algorithm.

The development of Overview is a story about text analysis algorithms applied to journalism, but the principles might apply to any sort of data analysis system. One definition says data science is the intersection of computer science, statistics, and subject matter expertise. This post is about connecting computer science with subject matter expertise.

The algorithmic honeymoon

In October 2010 I was working at the Associated Press on the recently released Iraq War Logs. AP reporters toiled for weeks with a search engine to find stories within these 391,832 documents. It was painful, and there had to be a better way.

Rather than deciding what to look for I wanted the computer to read the documents and tell me what was interesting. I had a hunch that classic techniques from information retrieval (TF-IDF and cosine similarity) might work here, so I hacked together a proof-of-concept visualization of one month of the Iraq War Logs data using Ruby and Gephi.
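To make the idea concrete, here is a minimal sketch of that classic approach: TF-IDF term weighting plus cosine similarity between document vectors. It's a toy version written in Java rather than the Ruby of the original hack, and the tiny "documents" are invented for illustration; the real proof of concept worked on the full text of the war logs.

import java.util.*;

// Sketch of TF-IDF weighting and cosine similarity between documents.
public class TfIdfSketch {

    // raw term counts for one document
    static Map<String, Double> termFrequencies(List<String> tokens) {
        Map<String, Double> tf = new HashMap<>();
        for (String t : tokens) tf.merge(t, 1.0, Double::sum);
        return tf;
    }

    // TF-IDF vectors for a small corpus
    static List<Map<String, Double>> tfidf(List<List<String>> docs) {
        Map<String, Integer> docFreq = new HashMap<>();
        List<Map<String, Double>> tfs = new ArrayList<>();
        for (List<String> doc : docs) {
            Map<String, Double> tf = termFrequencies(doc);
            tfs.add(tf);
            for (String term : tf.keySet()) docFreq.merge(term, 1, Integer::sum);
        }
        int n = docs.size();
        List<Map<String, Double>> vectors = new ArrayList<>();
        for (Map<String, Double> tf : tfs) {
            Map<String, Double> v = new HashMap<>();
            for (Map.Entry<String, Double> e : tf.entrySet()) {
                double idf = Math.log((double) n / docFreq.get(e.getKey()));
                v.put(e.getKey(), e.getValue() * idf); // rare terms count more
            }
            vectors.add(v);
        }
        return vectors;
    }

    // cosine similarity between two sparse vectors
    static double cosine(Map<String, Double> a, Map<String, Double> b) {
        double dot = 0, na = 0, nb = 0;
        for (Map.Entry<String, Double> e : a.entrySet()) {
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0.0);
            na += e.getValue() * e.getValue();
        }
        for (double x : b.values()) nb += x * x;
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        List<List<String>> docs = List.of(
                List.of("ied", "explosion", "convoy", "route"),
                List.of("ied", "detonation", "convoy", "casualties"),
                List.of("detainee", "transfer", "police", "station"));
        List<Map<String, Double>> v = tfidf(docs);
        System.out.println(cosine(v.get(0), v.get(1))); // higher: the two IED reports share terms
        System.out.println(cosine(v.get(0), v.get(2))); // 0.0: no terms in common
    }
}

Grouping documents whose vectors point in similar directions is what produced the clusters of tanker truck explosions, kidnappings, and battles described below.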

And it worked! By grouping similar documents together and coloring them by incident type we were able to see the broad structure of the war. It was immediately clear that most of the violence was between civilians, and we found clusters of events around tanker truck explosions, kidnappings, and specific battles.

A few months later we had a primitive interactive visualization. It was exciting to see the huge potential of text analysis in journalism! This was our algorithmic honeymoon, when the problems were clear and purely technical, and we took big steps with rapid iterations.

But that demo was all smoke and mirrors. It was the result of weeks of hacking at file formats and text processing and gluing systems together and there was no chance anyone but myself could ever run it. It was research code, the place where most visualization and natural language processing technology goes to die. No one attempted to do a story with the system because it wasn’t mature enough to try.

Worse, it wasn’t even clear how you would do a story starting from one of these visualizations. Yes, we could see patterns in the data, but what did those patterns mean and how would we turn them into a story? In retrospect, this uncertainty should have told us that despite our progress in algorithms, we didn’t yet understand the journalism part of the problem.

Getting real work done

The next step was a prototype tool, initially developed by Stephen Ingram at UBC and completed by the end of 2011. This version introduced the topic tree and its folders for the first time. And I had a document set: 4,500 pages of recently declassified reports concerning private security contractors in Iraq. Trying to do a story about these documents taught us a lot about the difference between an algorithm and an application.

The moment I began working with these documents — as a reporter, not a programmer — I discovered that it was stupendously important to have a smooth integrated document viewer. In retrospect it seems obvious that you’ll need to read a lot of documents while doing document mining, but it was easy to forget that sort of thing in the midst of talking about document vectors and topic modeling and fancy visualizations. I  also found that I needed labels to show the size of each cluster, got frustrated at the overly complex tagging model, and implemented more intuitive panning and zooming in the scatterplot window. A few weeks of hacking eventually got me to a system I could use for reporting.

The final story included the results of this document set analysis, reporting from other document sources, and an interview with a State Department official. This was the first time we were able to articulate a reporting methodology: I explored the topic tree from left to right, investigating each cluster and tagging to mark what I’d learned, then followed up with other sources. The aim of the reporting process was to summarize and contextualize the content of a large number of documents. This was a huge step forward for Overview, because it connected the very abstract idea of “patterns in the data” to a finished story. We thought all document set reporting problems would take this form. We were wrong.

Just as the proof-of-concept was research code, the prototype was the kind of code you write on deadline to get the story done by any means necessary. The data journalism community often writes and releases code written for a single story. This is valuable because it demonstrates a technique and might even provide building blocks for future stories, but it’s usually not a finished tool that other people can easily use.

We learned this lesson vividly when we tried to get people to install the Overview prototype. Git clone, run a couple of shell scripts to load your documents, how hard could it be? It turned out to be very hard. At NICAR 2012 I led a room full of people through installing Overview and loading up a sample file. We had every type of problem: incompatible versions of git, Ruby, and Java; operating system differences; and lots of people who had never used a command line before. Of 20 people who tried, only 3 got the system working. We were beginning to make contact with our user community.

Usability trumps algorithm

We re-wrote Overview as a web application to solve our installation woes (largely the work of Jonas Karlsson and Adam Hooper). We also dropped the scatterplot, the visualization we had started with, because log data and user interviews showed no one was using it. We went all-in on the tree and had a deployed system by the end of 2012.

Do you understand what is happening in this screenshot? Is it clear to you that the window on the lower left is a list of documents, each represented by a line of extracted keywords? It wasn't obvious to our users either, and no one used this new system for many months.

We knew that Overview was useful, because we and others had done stories with the prototype. But we were now expecting new people to come in fresh and learn to use the system without our help. That wasn’t happening. So we did think-aloud usability testing to find out why. We immediately discovered a number of serious usability problems. People were having a hard time getting their documents into Overview. They didn’t understand the document list interface. They didn’t understand the tree.

We spent months overhauling the UI. We hired a designer and completely rebuilt the document list. And based on user feedback, we changed the clustering algorithm.

During the prototype phase we had developed a high-performance document clustering algorithm based on preferentially sampling the edges between highly similar documents and building connected components, documented in this technical report. We were very proud of it. But it tended to produce long lists of very small clusters, meaning that each folder in the tree could have hundreds of sub-folders. This was a terrible way to navigate through a tree.

So we replaced our fancy clustering with the classic k-means technique. We apply this recursively, splitting each folder into at most five sub-folders according to an adaptive algorithm. The resulting tree is not as faithful to the structure of the data as our original clustering algorithm, but that doesn't matter. Overview's visualization is for humans, not machines. The point is not to have a hyper-accurate representation of the data, but a representation that users are able to interpret and trust. For this reason, it was absolutely necessary to be able to explain to a non-technical audience how Overview's clustering algorithm works.
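The structure of that recursive splitting is easy to sketch. The toy Java version below always uses k = 5 and clusters random two-dimensional points; the real Overview chooses k adaptively and works on sparse TF-IDF document vectors, so treat this only as an outline of the idea.

import java.util.*;

// Toy recursive k-means: split a folder into at most five sub-folders,
// then recurse into each sub-folder until folders are small.
public class RecursiveKMeans {

    static double dist2(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return s;
    }

    // Plain Lloyd's algorithm; returns the cluster index of each point.
    static int[] kMeans(List<double[]> points, int k, int iterations, Random rng) {
        double[][] centers = new double[k][];
        for (int i = 0; i < k; i++) centers[i] = points.get(rng.nextInt(points.size())).clone();
        int[] assignment = new int[points.size()];
        for (int it = 0; it < iterations; it++) {
            for (int p = 0; p < points.size(); p++) {
                int best = 0;
                for (int c = 1; c < k; c++)
                    if (dist2(points.get(p), centers[c]) < dist2(points.get(p), centers[best])) best = c;
                assignment[p] = best;
            }
            for (int c = 0; c < k; c++) {
                double[] sum = new double[points.get(0).length];
                int count = 0;
                for (int p = 0; p < points.size(); p++)
                    if (assignment[p] == c) { count++; for (int d = 0; d < sum.length; d++) sum[d] += points.get(p)[d]; }
                if (count > 0) { for (int d = 0; d < sum.length; d++) sum[d] /= count; centers[c] = sum; }
            }
        }
        return assignment;
    }

    // Split a folder and recurse into its children.
    static void buildTree(List<double[]> docs, int depth, Random rng) {
        System.out.printf("%sfolder with %d documents%n", "  ".repeat(depth), docs.size());
        if (docs.size() <= 5) return;              // small folders become leaves
        int k = Math.min(5, docs.size());          // at most five sub-folders
        int[] assignment = kMeans(docs, k, 10, rng);
        List<List<double[]>> children = new ArrayList<>();
        for (int c = 0; c < k; c++) children.add(new ArrayList<>());
        for (int p = 0; p < docs.size(); p++) children.get(assignment[p]).add(docs.get(p));
        for (List<double[]> child : children)
            if (!child.isEmpty() && child.size() < docs.size()) buildTree(child, depth + 1, rng);
    }

    public static void main(String[] args) {
        Random rng = new Random(0);
        List<double[]> docs = new ArrayList<>();
        for (int i = 0; i < 40; i++) docs.add(new double[]{rng.nextDouble(), rng.nextDouble()});
        buildTree(docs, 0, rng);
    }
}

Capping each split at five children is what keeps every level of the tree readable, which matters far more to a reporter than squeezing out a slightly better clustering score.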

What do journalists actually do with documents?

We solved our usability problems by the summer of 2013 and journalists began to use our system; we’ve had a great crop of completed stories in the last six months. And as we gained experience we finally began to understand what journalists actually do with a set of documents. We have seen four broad types of document-driven stories, and only one of them is the “summarize these documents” task we originally thought we wanted to support. In other cases the journalist is looking for something specific, or needs to classify and tag every document, or is looking to separate the junk from the gold.

Today we have a solid connection to our users and their problems.  Our users are generally not full-time data journalists and have typically never seen a command line. They get documents in every conceivable format, from paper to PDF. Even when the material is provided in electronic form it may need OCR, or the files may need to be split into their original documents.  Our users are on deadline and therefore impatient: Overview’s import must be extremely quick or reporters will give up and start reading their documents manually. And each journalist might only use Overview once a year when a document-driven story comes their way, which means the software cannot require any special training.

We learned what journalists actually wanted to do, and we implemented features to do it. We implemented fuzzy search to help find things in error-prone OCR'd material. We added an easy way to show the documents that don't yet have tags, for those projects where you really do need to read every page. And Overview now supports multiple languages and lets you customize the clustering. We are still working on handling a wide range of import formats and scenarios, including integrated OCR.

This is what the UI looks like today.

Algorithms are not enough

Overview began when we saw that text analysis algorithms could be applied to journalism. We originally envisioned a system for stringing together algorithmic building blocks, a concept we called a visualization sketching system. That idea was totally wrong, because it was completely disconnected from real users and real work. It was a technologist’s fantasy.

Unfortunately, it appears that much of the natural language processing, machine learning, and visualization community is stuck in a world without people. The connection between the latest topic modeling paper and the needs of potential users is weak at best. Such algorithms are evaluated by abstract statistical scores such as entropy or precision-recall curves, which do not capture critical features such as whether the output makes any sense to users. Even when such topic models are built into exploratory visualization systems (like this and this) the research is typically disconnected from any domain problem. While it seems very attractive to build a general system, this approach risks ignoring all real applications and users. (And the test data is almost always academic papers or news archives, both of which are unrealistically clean.) We are seeing ever more sophisticated techniques but very little work that asks what makes one approach "better" than another, which is of course highly dependent on the application domain.

There is a growing body of work that recognizes this. There is work on designing interpretable text visualizations, research which compares document similarity algorithms to human ratings, and evolving metrics for topic quality that have been validated by user testing.  See also our discussion of topic models and XKCD. And we are beginning to see advanced visualization systems evaluated with real users on real work, like this.

It’s also important to remember that manual methods are valuable. Reporters will spend days reading and annotating thousands of pages because they are in the business of getting stories done. Machine learning might help with categorize-and-count tasks, but the computer is going to make errors that may compromise the accuracy of the story, so the journalist must review the output anyway. The best system would start with  a seamless UI for manual review and tagging, then add a machine learning boost. This is exactly what we are pursuing for Overview.

Our recommendation to technologists of all stripes is this: get out more. Don’t design for everyone but for specific users. Make the move from algorithmic research to the anything-goes world of getting the work done. Optimize your algorithms against user needs rather than abstract metrics; those needs will be squishy and hard to measure, but they’ll lead you to reality. Test with real data, not clean data. Finish your users’ projects, not your projects. Only then will you understand enough to know what algorithms you need, and how to build them into a killer app.

What do journalists do with documents? The different kinds of document-driven stories

Large sets of documents have been central to some of the biggest stories in journalism, from the Pentagon Papers to the Enron emails to the NSA files. But what, exactly, do journalists do with all these documents when they get them? In the course of developing Overview we’ve talked to a lot of journalists about their document mining problems, and we’ve discovered that there are several recurring types of document-driven story.

The smoking gun: searching for specific documents

In this type of work the journalist is trying to find a single document, or a small number of documents, that make the story. This might be the memo that proves corruption, the purchase order that shows money changed hands, or the key paragraph of an unsealed court transcript.

Jarrel Wade's story about the Tulsa police department spending millions of dollars on squad car computers that didn't work properly began with a tip. He knew there was an ongoing internal investigation, but little more until his FOIA request returned almost 7,000 pages of emails. His final story rested on a dozen key documents he discovered by using Overview to rapidly and systematically review every page.

This story demonstrates several recurring elements of smoking gun stories. Wade had a general idea what the story was about, which is why he asked for the documents in the first place. But he didn’t know exactly what he was looking for, which makes text search difficult. Keyword search can also fail if there are many different ways to describe what you’re looking for, or if you need to look for a word that has several meanings — imagine searching for “can” meaning container, and finding that almost every document contains “can” meaning able. Even worse, OCR errors can prevent you from finding key documents, which is why Overview supports fuzzy search.

The trend story: getting the big picture

As opposed to a smoking gun story where only specific documents matter, a trend story is about broad patterns across many documents. A comprehensive analysis of a comprehensive set of documents makes a powerful argument.

For my story about private security contractors in Iraq I wanted to go beyond the few high-profile incidents that had made headlines during the height of the war. I used Overview to analyze 4,500 pages of recently declassified documents from the U.S. Department of State in order to understand the big picture questions. What were the day-to-day activities of armed private security contractors in Iraq? What kind of oversight did they have? Did contractors frequently injure civilians, or was it rare?

Overview showed the broad themes running across many documents in this unique collection of material. Combined with searches for specific types of incidents and a random sampling technique to back my claims with numbers, I was able to tell a carefully documented big picture story about this sensitive issue.

Categorize and count: turning documents into data

Some trend stories depend on hard numbers: of 367 children in foster care, 213 were abused. 92% of the complaints concerned noise. The state legislature has never once proposed a bill to address this problem. This type of story involves categorizing every document according to some scheme. Both the categories you decide to use and the number of documents in each category can be important parts of the story.

For their comprehensive report on America's underground market for adopted children, Ryan McNeill, Robin Respaut and Megan Twohey of Reuters analyzed more than 5,000 messages from a Yahoo discussion group spanning a five-year period. They created an infographic summarizing their discoveries: 261 different children were advertised over that time, from 34 different states. Over 70% of these children had been born abroad, in at least 23 different countries. They also documented the number of cases where children were described as having behavioral disorders or being victims of physical or sexual abuse. When combined with narratives of specific cases, these figures tell a powerful story about the nature and scope of the issue.

Overview’s tagging interface is well suited to categorize-and-count tasks. Even better, it can take much of the drudgery out of this type of work because similar documents are automatically grouped together. When you’re done you can export your tags to produce visualizations. We are planning to add machine learning techniques to Overview so that you can teach the system how you want your documents tagged.

Wheat vs. chaff: filtering out irrelevant material

Sometimes a journalist gets ahold of a lot of potentially good material, but you can’t publish potential. Some fraction of the documents are interesting, but before a reporter can report they have to find that fraction.

In the summer of 2012 Associated Press reporter Jack Gillum used FOIA requests to obtain over 9,000 pages of then-VP nominee Paul Ryan’s correspondence with over 200 Federal agencies. He used Overview to find the material that Ryan had written himself, as opposed to the thousands of pages of attachments and other supporting documents. By analyzing Ryan’s letters he was able to show that the Congressman was privately requesting money from many of the same government programs he had publicly criticized as wasteful.

This type of analysis is somewhere between the smoking gun and trend stories: first find the interesting subset of documents, then report on the trends across that subset. Overview was able to sort all of Ryan’s correspondence into a small number of folders because it recognized that the boilerplate language on his letterhead was shared across many documents.

What do you need to do with your documents?

These are the patterns we've seen, but we've also discovered that there are huge variations. Every story has unique aspects and challenges. Have you done an interesting type of journalistic document set analysis? Do you have a problem that you're hoping Overview can solve? Contact us.

Advanced search: quoted phrases, boolean operators, fuzzy matching, and more

Overview now supports advanced syntax in the search field, like this:

This gives you an enormous amount of control over finding specific documents.

  • Use AND, OR and NOT to find all documents with a specific combination of words or phrases.
  • Use quotes to search for multiple word phrases like “disaster recovery” or “nacho cheese”.
  • Use ~ after any word to do a fuzzy search, like “Smith~”. This will match all words with up to two characters added, deleted, or changed. Great for searching through material with OCR errors.
  • By default, multiple words are now ANDed together if you don’t specify anything else.
  • Use "title:foo" to search for all documents with the word "foo" in the title (the title is the upload filename, or the contents of the title column if you imported from CSV).
  • You can use wildcards like * and ? or even full regular expressions using the syntax text:/regex/ or title:/regex/. Note that regular expressions cannot have spaces in them, because you are actually searching through the index of words used in the documents, not the original text.

There are actually many more things you can do with this powerful search tool. See the ElasticSearch advanced query syntax for details.
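For example, you might combine several of these features in one query to find fuzzy matches of a name inside a particular kind of document while excluding routine attachments. The terms here are invented for illustration; substitute your own:

Smith~ AND "security contractor" NOT title:attachment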