Introduction to the Metatext platform

The Metatext platform allows you to build language models into your product without writing code. Our language engines can solve a wide range of natural language use cases, such as customer service, user feedback analysis, content moderation, and spam detection.

As a no-code SaaS platform, it reduces infrastructure costs and the need for a dedicated ML team by leaving the heavy lifting to our APIs and interfaces. Your team can build and manage the entire lifecycle of an AI model, from prototyping to production: creating and annotating datasets, training models, and integrating them with your workflows. All models can be accessed via the Playground or the API.
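If you prefer to integrate programmatically, calling a deployed model is typically a single HTTP request. The snippet below is a minimal sketch in Python using the `requests` library; the endpoint URL, auth header, and payload fields are placeholders, not Metatext's documented API, so check the API reference for the actual contract.

```python
# Minimal sketch of calling a deployed model over HTTP.
# The endpoint, auth header, and payload shape are illustrative placeholders.
import requests

API_KEY = "your-api-key"  # hypothetical credential from your account settings
ENDPOINT = "https://api.example.com/v1/models/<model-id>/predict"  # placeholder URL

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"text": "I never received my order and support is not answering."},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # e.g. predicted label(s) with confidence scores
```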

The Metatext team is made up of researchers and AI engineers who aim to make these technologies easier to adopt for different audiences, including developers, business analysts, and data scientists.

Basic AI and NLP concepts

Artificial Intelligence (AI) refers to systems capable of learning patterns from data. These systems make it possible to analyze information and make decisions at a scale, and often at a level of complexity, that would be humanly impossible. Large companies adopted AI largely because of the productivity gains these technologies bring. AI can perform a variety of tasks: labeling information (classification), grouping similar information (clustering), summarizing documents (summarization), estimating a value (regression), and so on.

These tasks can also be divided into areas of AI based on the type of data involved. Computer vision models interpret and generate images, working with visual data, often in image or video format. Natural language processing models can read and write, working with language data, usually in text format. There are also traditional models, such as credit scoring, that work with tabular data, for example tables of demographic and socioeconomic information.
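To make the classification task concrete, here is a small, self-contained sketch using the open-source scikit-learn library (independent of the Metatext platform) that learns to label short texts; the example texts and labels are made up for illustration, and real use cases need far more data.

```python
# Toy text classification: learn to label texts as "complaint" or "praise".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "My package arrived broken and nobody answers my emails",
    "Great support, my issue was solved in minutes",
    "I was charged twice and want a refund",
    "The new feature is excellent, very easy to use",
]
labels = ["complaint", "praise", "complaint", "praise"]

# Turn raw text into TF-IDF features, then fit a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Predict a label for an unseen text (results on toy data are only indicative).
print(model.predict(["The checkout keeps failing and nobody replies"]))
```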

Artificial Intelligence vs Machine Learning vs Deep Learning

Within the AI spectrum, beyond tasks such as classification, regression, clustering, and summarization, there are a few important terms to clarify. Machine learning refers to the algorithms responsible for learning patterns from data, employing statistics, mathematical equations, and various data structures to make that learning efficient; examples include decision trees, logistic regression, and neural networks. Deep learning is a family of algorithms based on neural networks, optimized for extracting patterns from large volumes of data using architectures and techniques loosely inspired by the human brain. These neural networks are extremely efficient at extracting patterns from unstructured data such as images, text, and audio. Many language models are deep learning models, and several use the Transformer architecture, which is the basis of the language models we have seen lately: BERT, GPT-3, and other recently released LLMs.
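As a quick illustration of a Transformer-based model in practice, the sketch below uses the open-source Hugging Face `transformers` library (not the Metatext API) to run a pretrained sentiment classifier; the first call downloads the default model weights.

```python
# Run a pretrained Transformer for sentiment classification.
# Requires: pip install transformers torch
from transformers import pipeline

# Loads a small default English sentiment model on first use.
classifier = pipeline("sentiment-analysis")

print(classifier("The onboarding flow was confusing, but support fixed it quickly."))
# Returns a list like [{"label": "POSITIVE", "score": 0.99}];
# the exact label and score depend on the default model version.
```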


Large Language Models

Going deeper into large language models (LLMs), we can list some of the tasks they are able to solve. Besides classification and summarization, which we have already covered, there are tasks such as named entity recognition (NER), question answering (Q&A), and text generation (completion). Another benefit of LLMs is transfer learning: you can take a base model and adjust it to your data and domain, taking advantage of what the model has already learned and teaching it with your own data to perform a specific task. This makes the model much more of an expert in that task; we call this process fine-tuning. You can also choose models that perform better for a particular language or a particular task.
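To ground those task names, here is a brief sketch, again with the open-source `transformers` library rather than the Metatext API, showing named entity recognition and question answering on a sample sentence.

```python
# Named entity recognition and extractive question answering with pretrained models.
# Requires: pip install transformers torch
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")
print(ner("Ada Lovelace worked with Charles Babbage in London."))
# -> person and location entities (exact spans depend on the default model)

qa = pipeline("question-answering")
print(qa(question="Where did they work?",
         context="Ada Lovelace worked with Charles Babbage in London."))
# -> an answer span such as "London" with a confidence score
```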

MLOps for Large Language Models

Despite all these benefits and recent advances, such as GPT, ChatGPT, and other models, it is still complex to deploy customized models in your own environment: it requires infrastructure, high costs, and specialized MLOps knowledge. Metatext targets this scenario, delivering a platform that takes on the headaches of implementing, training, and productionizing an LLM. With a no-code approach you can build classification models and fine-tune them on top of other LLMs, without having to program or set up an environment for either training or deployment.
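For contrast, the sketch below outlines roughly what fine-tuning a small Transformer classifier looks like with open-source tooling (Hugging Face `transformers` and `datasets`); the base model, public dataset, and hyperparameters are illustrative, and this code, plus the GPU infrastructure behind it, is the kind of work a no-code workflow abstracts away.

```python
# Rough outline of fine-tuning a pretrained Transformer for text classification.
# Requires: pip install transformers datasets torch  (and typically a GPU)
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"   # small English base model (illustrative)
dataset = load_dataset("imdb")           # public sentiment dataset (illustrative)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()  # this step is where most of the compute cost lives
```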

Concept Drift

There are some implications when using AI: development differs from traditional software, and so does maintenance. Even if learning is efficient and you obtain good performance on a given dataset, AI models become "outdated" quickly, because they need to be constantly fed with new data to learn new patterns. We call it concept drift (or model drift) when a model stops performing as well in the real world after some time in use and its performance declines. This requires a constant model monitoring process to identify the right moment to update the model with new data and improve its learning.
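A minimal way to catch drift in practice is to keep scoring the model on freshly labeled samples and compare the result with the accuracy it had at deployment time; the sketch below uses illustrative thresholds and is not a Metatext feature.

```python
# Naive drift check: compare current accuracy on recently labeled data
# against the accuracy measured when the model was deployed.
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.92   # measured at deployment time (illustrative)
TOLERANCE = 0.05           # acceptable degradation before retraining

def check_drift(model, recent_texts, recent_labels):
    """Return True if the model looks outdated on newly labeled examples."""
    current = accuracy_score(recent_labels, model.predict(recent_texts))
    print(f"current accuracy: {current:.2f} (baseline {BASELINE_ACCURACY:.2f})")
    return current < BASELINE_ACCURACY - TOLERANCE

# if check_drift(model, recent_texts, recent_labels):
#     ...collect and annotate new data, retrain, and redeploy...
```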

Check out the Terms of service and Privacy policy.