SQuAD

The Stanford Question Answering Dataset

What is SQuAD?

Stanford Question Answering Dataset (SQuAD) is a new reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage. With 100,000+ question-answer pairs on 500+ articles, SQuAD is significantly larger than previous reading comprehension datasets.

Leaderboard

Since the release of our dataset (and paper), the community has made rapid progress! Here are the ExactMatch (EM) and F1 scores of the best models evaluated on the test and development sets of v1.1. Will your model outperform humans on the QA task?

Model | Test EM | Test F1 | Dev EM | Dev F1
Human Performance (Stanford University; Rajpurkar et al. '16) | 82.3 | 91.2 | 81.4 | 91.0
r-net (Microsoft Research Asia) | TBD | TBD | 65.8 | 75.0
Co-attention (Allen Institute for Artificial Intelligence & University of Washington) | 61.8 | 72.5 | 62.2 | 72.6
No Chunker and Neural Chunk Ranker with Fine Attention (IBM) | TBD | TBD | 61.8 | 70.7
Match-LSTM with Ans-Ptr (Boundary) (Singapore Management University; Wang & Jiang '16) | 60.5 | 70.7 | 59.4 | 70.0
Trained Chunker and Neural Chunk Ranker (IBM) | TBD | TBD | 58.7 | 68.4
Match-LSTM with Ans-Ptr (Sequence) (Singapore Management University; Wang & Jiang '16) | 54.5 | 67.7 | 54.8 | 68.0
Attention and Chunking Single Model (IBM) | TBD | TBD | 48.0 | 64.5
Logistic Regression Baseline (Stanford University; Rajpurkar et al. '16) | 40.4 | 51.0 | 39.8 | 51.0
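
For reference, Exact Match counts a prediction as correct only if it equals a gold answer after light normalization, while F1 measures token-level overlap between the prediction and a gold answer; the official evaluation script linked under Getting Started is the authoritative definition. A rough Python sketch of the per-answer scores:

from collections import Counter
import re
import string

def normalize(text):
    # Lowercase, drop punctuation and the articles a/an/the, collapse whitespace.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    return float(normalize(prediction) == normalize(gold))

def f1(prediction, gold):
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

The official script takes the maximum of each score over all gold answers for a question and averages the result over the dataset.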

Getting Started

We've built a few resources to help you get started with the dataset.

To get a feel for the dataset, you can explore it visually.

Explore SQuAD

Download a copy of the dataset, which is distributed under the CC BY-SA 4.0 license.
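
Each split is a single JSON file. The sketch below, which assumes the dev split has been saved as dev-v1.1.json in the working directory, walks the nesting used by the v1.1 release (articles, then paragraphs, then question-answer pairs) and checks that an answer really is a character-level span of its passage:

import json

# Assumes dev-v1.1.json has been downloaded into the working directory.
with open("dev-v1.1.json") as f:
    dataset = json.load(f)["data"]

num_qas = sum(len(paragraph["qas"])
              for article in dataset
              for paragraph in article["paragraphs"])
print(f"{len(dataset)} articles, {num_qas} question-answer pairs")

# Every answer is a span of its passage: "answer_start" is a character
# offset into the paragraph's "context".
paragraph = dataset[0]["paragraphs"][0]
qa = paragraph["qas"][0]
answer = qa["answers"][0]
start = answer["answer_start"]
assert paragraph["context"][start:start + len(answer["text"])] == answer["text"]
print(qa["question"], "->", answer["text"])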

To evaluate your models, we have also made available the evaluation script we will use for official evaluation, along with a sample prediction file that the script will take as input. To run the evaluation, use python evaluate-v1.1.py <path_to_dev-v1.1> <path_to_predictions>.
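
The prediction file is a JSON object that maps each question id in the dev set to a single predicted answer string. A minimal sketch of producing one, where predict_answer is a hypothetical stand-in for your model:

import json

with open("dev-v1.1.json") as f:
    dev = json.load(f)["data"]

def predict_answer(context, question):
    # Hypothetical placeholder: a real model returns a span of the context.
    return context.split(".")[0]

predictions = {}
for article in dev:
    for paragraph in article["paragraphs"]:
        for qa in paragraph["qas"]:
            predictions[qa["id"]] = predict_answer(paragraph["context"], qa["question"])

with open("predictions.json", "w") as f:
    json.dump(predictions, f)

The resulting file can then be scored with python evaluate-v1.1.py dev-v1.1.json predictions.json.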

Once you have built a model that works to your expectations on the dev set, you can submit it to get official scores on the dev set and a hidden test set. To preserve the integrity of test results, we do not release the test set to the public. Instead, we require you to submit your model so that we can run it on the test set for you. Here's a tutorial walking you through official evaluation of your model:

Submission Tutorial

Because SQuAD is an ongoing effort, we expect the dataset to evolve.

To keep up to date with major changes to the dataset, please subscribe.

Have Questions?

Ask us questions in our Google group or at pranavsr@stanford.edu.
