Inference v1
Making requests
The Inference API calls the NLP model that was deployed automatically after training. This section covers the configuration you need to use it.
Here is an example request to the Inference API using Python:
```python
import requests

model_id = "YOUR_MODEL_ID"
api_key = "YOUR_API_KEY"

url = f"https://api.metatext.ai/v1/inference/{model_id}"
payload = {"text": "hello world"}
headers = {"content-type": "application/json", "x-api-key": api_key}

response = requests.post(url=url, json=payload, headers=headers)
print(response.json())
# output:
# {
#   "prediction": "acquisition_credit_card",
#   "confidence": 0.8284599730460649
# }
```
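For production use you will usually want to surface HTTP errors rather than silently printing an error body. The wrapper below is a minimal sketch, assuming the `requests` library; the `classify` function name and the injectable `session` parameter are illustrative conveniences, not part of the official client:

```python
import requests

# Hypothetical helper (not part of the official API client): wraps the
# request shown above and raises on HTTP error responses.
def classify(text, model_id, api_key, session=requests):
    url = f"https://api.metatext.ai/v1/inference/{model_id}"
    headers = {"content-type": "application/json", "x-api-key": api_key}
    response = session.post(url, json={"text": text}, headers=headers)
    response.raise_for_status()  # raise on 4xx/5xx instead of returning an error body
    return response.json()  # e.g. {"prediction": ..., "confidence": ...}
```

Passing the HTTP client in as `session` also makes the helper easy to test with a stub, without hitting the live endpoint.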
Path parameters

| Parameter | Type | Description |
| --- | --- | --- |
| model_id | string | Your model ID (alphanumeric, up to 64 characters) |
Payload

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| text | string | required | Your text |
| refresh | boolean | optional | Refresh your model cache |
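Since `refresh` is optional, it can be omitted from the request body entirely. A small sketch of assembling the payload; the `build_payload` helper name is hypothetical, used here only for illustration:

```python
# Hypothetical helper: builds the request body, including the optional
# "refresh" flag only when a cache refresh is requested.
def build_payload(text, refresh=False):
    payload = {"text": text}
    if refresh:
        payload["refresh"] = True
    return payload
```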
Response

The API returns a string for `prediction` and a number for `confidence`.

| Field | Type | Description |
| --- | --- | --- |
| prediction | string | The prediction for the provided text |
| confidence | number | The confidence value for the predicted label |
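Both fields can be read directly from the parsed JSON. The helper below is illustrative only: the `accept_prediction` name and the 0.5 threshold are assumptions for the example, not something the API defines:

```python
# Illustrative consumer of the response body. The 0.5 threshold is an
# arbitrary example value, not part of the API.
def accept_prediction(response, threshold=0.5):
    """Return the predicted label only when confidence clears the threshold."""
    if response["confidence"] >= threshold:
        return response["prediction"]
    return None

# Using the example response from above:
result = accept_prediction(
    {"prediction": "acquisition_credit_card", "confidence": 0.8284599730460649}
)
# result == "acquisition_credit_card"
```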