Classify and extract text 10x better and faster 🦾



WikiText-103 & 2 Dataset

Created by Merity et al. in 2016, the WikiText-103 & 2 datasets contain word- and character-level tokens extracted from English Wikipedia articles. Together they comprise over 100 million tokens, distributed in TOKENS file format.
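To illustrate what the TOKENS files contain, here is a minimal Python sketch that tokenizes a small WikiText-formatted excerpt. The sample text is a toy excerpt written for this example; the conventions it follows (space-separated tokens, section headings wrapped in " = " markers, "@-@" as a hyphen placeholder, and "<unk>" for out-of-vocabulary words) are those of the published WikiText datasets.

```python
# A toy excerpt in WikiText format (illustrative, not the real file contents).
SAMPLE = (
    " = Valkyria Chronicles III = \n"
    "\n"
    " Senjō no Valkyria 3 is a tactical role @-@ playing video game . \n"
    " The game was released in <unk> 2011 . \n"
)

def iter_tokens(text):
    """Yield word-level tokens, skipping blank lines and heading lines."""
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("="):
            continue  # skip blanks and " = Heading = " lines
        yield from stripped.split()  # tokens are already space-separated

tokens = list(iter_tokens(SAMPLE))
print(len(tokens))            # → 21 word-level tokens in this excerpt
print(tokens.count("<unk>"))  # → 1 out-of-vocabulary placeholder
```

Because the files come pre-tokenized, splitting on whitespace is sufficient; no additional tokenizer is needed to consume them.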

Dataset Sources

Here you can download the WikiText-103 & 2 dataset in TOKENS format.

Download WikiText-103 & 2 dataset TOKENS files

Fine-tune with WikiText-103 & 2 dataset

Metatext is a powerful no-code tool to train, tune, and integrate custom NLP models.


Paper

Read the full original WikiText paper, "Pointer Sentinel Mixture Models" (Merity et al., 2016).

Download PDF paper



Metatext helps you classify and extract information from text and documents, using language models customized with your own data and expertise.