NEJM-enzh Dataset
Created by Liu et al. in 2020, NEJM-enzh is an English-Chinese parallel corpus drawn from the New England Journal of Medicine (NEJM). It contains about 100,000 sentence pairs, with roughly 3,000,000 tokens on each side, in Chinese and English.
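Parallel corpora like this are typically distributed as two line-aligned files, one per language, where line *i* in the English file translates line *i* in the Chinese file. As a minimal sketch (file handling and the sample sentence pair are illustrative assumptions, not taken from the actual corpus release), pairing the two sides might look like:

```python
from io import StringIO

def load_parallel(src_lines, tgt_lines):
    """Zip two line-aligned iterables of text into (source, target)
    sentence pairs, stripping trailing newlines."""
    return [(s.strip(), t.strip()) for s, t in zip(src_lines, tgt_lines)]

# Illustrative aligned lines; in practice these would be open file
# handles for the English and Chinese sides of the corpus.
en = StringIO("The patient was treated with aspirin.\n")
zh = StringIO("患者接受了阿司匹林治疗。\n")

pairs = load_parallel(en, zh)
print(pairs[0])
```

The same `load_parallel` helper works unchanged on real file objects (`open("nejm.en")`, `open("nejm.zh")`), assuming the release uses one sentence per line.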
Dataset Sources
Here you can download the NEJM-enzh dataset in n/a format.
Download the NEJM-enzh dataset (n/a files)
Fine-tune with NEJM-enzh dataset
Metatext is a no-code tool for training, tuning, and integrating custom NLP models.
Paper
Read the full original NEJM-enzh paper.