WikiMatrix Dataset
Created by Schwenk et al. in 2019, the WikiMatrix dataset contains 135 million parallel sentences mined from Wikipedia, covering 1,620 language pairs across 85 languages. The dataset is multilingual and is distributed as TSV files, one per language pair.
Dataset Sources
Here you can download the WikiMatrix dataset in TSV format.
Download the WikiMatrix dataset TSV files
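As a rough illustration of how the TSV files can be consumed, the sketch below reads one language-pair file and keeps only higher-confidence sentence pairs. The file name WikiMatrix.en-fr.tsv.gz, the load_pairs helper, and the 1.04 margin threshold are illustrative assumptions; the three-column layout (margin score, source sentence, target sentence) follows the format described by the authors.

```python
import csv
import gzip

def load_pairs(path, min_margin=1.04):
    """Minimal sketch: read a WikiMatrix TSV file and keep pairs whose
    margin score is at least min_margin (assumed threshold for illustration)."""
    pairs = []
    with gzip.open(path, "rt", encoding="utf-8") as handle:
        reader = csv.reader(handle, delimiter="\t", quoting=csv.QUOTE_NONE)
        for row in reader:
            if len(row) != 3:
                continue  # skip malformed lines
            margin, src, tgt = row
            if float(margin) >= min_margin:
                pairs.append((src, tgt))
    return pairs

if __name__ == "__main__":
    # Hypothetical local path to one downloaded language-pair file.
    pairs = load_pairs("WikiMatrix.en-fr.tsv.gz")
    print(f"Loaded {len(pairs)} sentence pairs")
```

Filtering on the margin score lets you trade corpus size for alignment quality before using the pairs for training.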
Fine-tune with WikiMatrix dataset
Metatext is a powerful no-code tool to train, tune, and integrate custom NLP models.
Paper
Read the full original WikiMatrix paper.