WikiSplit Dataset

Created by Botha et al. in 2018, the WikiSplit dataset contains one million English sentences extracted from Wikipedia edits, each paired with a split into two sentences that together preserve the original meaning. The dataset ships as TSV files with 1M examples.

Dataset Sources

Here you can download the WikiSplit dataset in TSV format.

Download WikiSplit dataset TSV files
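As a rough sketch of how the TSV files can be parsed (the two-column layout, with the original sentence in the first column and the two split sentences joined by a `<::::>` delimiter in the second, matches the released files, but verify against your download; the sample row below is illustrative):

```python
# Illustrative row in WikiSplit's TSV layout: the original complex
# sentence, a tab, then the two split sentences joined by " <::::> ".
sample_tsv = (
    "Street Rod is the first in a series of two games released for "
    "the PC and Commodore 64 in 1989 .\t"
    "Street Rod is the first in a series of two games . <::::> "
    "It was released for the PC and Commodore 64 in 1989 .\n"
)

def parse_wikisplit(tsv_text):
    """Yield (complex_sentence, [split_sentence, ...]) pairs."""
    for line in tsv_text.strip().splitlines():
        complex_sentence, splits = line.split("\t")
        yield complex_sentence, [s.strip() for s in splits.split("<::::>")]

for source, targets in parse_wikisplit(sample_tsv):
    print(len(targets))  # each complex sentence maps to two split sentences
```

Splitting on the tab character directly (rather than using a CSV reader) avoids any quote-handling surprises, since the sentences themselves may contain quotation marks.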

Fine-tune with WikiSplit dataset

Metatext is a powerful no-code tool to train, tune, and integrate custom NLP models.

➡️  Learn more

Paper

Read the original WikiSplit paper.

Download PDF paper
