jplu tf xlm roberta-base model

🤗 Hugging Face: jplu/tf-xlm-roberta-base

The jplu tf-xlm-roberta-base model is a Natural Language Processing (NLP) model implemented in the Transformers library, generally used from the Python programming language.

What is the jplu tf xlm roberta-base model?

XLM-RoBERTa is a scaled cross-lingual sentence encoder. XLM-R achieves state-of-the-art results on multiple cross-lingual benchmarks. It is trained on 2.5 TB of data across 100 languages, filtered from Common Crawl. All models are available on the Hugging Face model hub for TensorFlow (each repository ships a tf_model.h5 checkpoint) and can be loaded like: TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-base").
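To make this concrete, here is a minimal TensorFlow loading sketch (assuming transformers, tensorflow, and sentencepiece are installed; the final print is just illustrative):

# Load the TensorFlow model and its tokenizer
from transformers import AutoTokenizer, TFXLMRobertaModel

tokenizer = AutoTokenizer.from_pretrained("jplu/tf-xlm-roberta-base")
model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-base")

# Encode a sentence and run a forward pass
inputs = tokenizer("Hello world!", return_tensors="tf")
outputs = model(**inputs)

# last_hidden_state has shape (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)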

Fine-tune jplu tf-xlm-roberta-base models

Metatext is a powerful no-code tool to train, tune, and integrate custom NLP models


Model usage

You can easily find the jplu tf-xlm-roberta-base model in the transformers Python library. To download and use any of the pretrained models for a given task, you only need a few lines of code (PyTorch version). Here is an example, starting with installing the library using pip (a package installer for Python).

Download and install using pip

$ pip install transformers
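
The TensorFlow loading example above additionally needs TensorFlow itself, and XLM-R's tokenizer depends on sentencepiece; as a sketch, all three can be installed together:

$ pip install transformers tensorflow sentencepiece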

Usage in Python

# Import generic wrappers
from transformers import AutoModel, AutoTokenizer

# Define the model repo
model_name = "jplu/tf-xlm-roberta-base"

# Download the model and its tokenizer
model = AutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Transform input text into token tensors
inputs = tokenizer("Hello world!", return_tensors="pt")

# Apply the model
outputs = model(**inputs)
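Note that jplu/tf-xlm-roberta-base is published as a TensorFlow checkpoint. If the PyTorch load above cannot find PyTorch weights, from_pretrained accepts a from_tf flag that converts the TensorFlow weights on the fly (a sketch, requiring TensorFlow to be installed):

# Convert the TensorFlow checkpoint to PyTorch weights on the fly
model = AutoModel.from_pretrained(model_name, from_tf=True)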

More info about jplu tf-xlm-roberta-base

See the XLM-RoBERTa paper, Unsupervised Cross-lingual Representation Learning at Scale, and the Hugging Face model page for downloads and more information.


Classify and extract text 10x better and faster 🦾

Metatext helps you classify and extract information from text and documents using language models customized with your data and expertise.