RoBERTaLexPT: A Legal RoBERTa Model pretrained with deduplication for Portuguese

  • Eduardo Garcia Universidade Federal de Goiás
  • Nádia Félix Felipe da Silva Universidade Federal de Goiás
  • Juliana Gomes Universidade Federal de Goiás
  • Hidelberg Albuquerque Universidade Federal Rural de Pernambuco
  • Ellen Souza Universidade Federal Rural de Pernambuco
  • Felipe Siqueira Universidade de São Paulo
  • Eliomar Lima Universidade Federal de Goiás
  • André Carvalho Universidade de São Paulo
Keywords: language model, legal domain, benchmark

Abstract

This work investigates the application of Natural Language Processing (NLP) to the legal domain in the Portuguese language, emphasizing the importance of adapting pre-trained models, such as RoBERTa, using specialized corpora from the legal domain. We compiled and pre-processed LegalPT, a Portuguese legal corpus, addressing the challenge of high document duplication in legal corpora and measuring the impact of hyperparameters and embedding initialization. Our experiments revealed that pre-training on a combination of legal and general data produced more effective models for legal tasks, with RoBERTaLexPT outperforming both larger models trained on generic corpora and legal models from related work. We also aggregated a legal benchmark, the PortuLex benchmark. This study contributes to improving NLP solutions in the Brazilian legal context by providing enhanced models, a specialized corpus, and a benchmark dataset. For reproducibility, we will make the related code, data, and models available.
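
The corpus deduplication mentioned in the abstract is not detailed on this page. As an illustration only, the sketch below shows one common approach to near-duplicate detection in large text corpora: MinHash-estimated Jaccard similarity over character shingles. All names, parameters, and thresholds here are hypothetical assumptions for the sketch, not the paper's actual pipeline.

```python
# Minimal sketch of MinHash-based near-duplicate detection.
# NOT the paper's pipeline; an illustration of the general technique,
# using only the Python standard library. Parameters are illustrative.
import hashlib

NUM_HASHES = 64      # number of salted hash functions (assumed value)
SHINGLE_SIZE = 5     # character n-gram size (assumed value)
DUP_THRESHOLD = 0.8  # estimated Jaccard above which docs count as duplicates


def shingles(text: str, k: int = SHINGLE_SIZE) -> set[str]:
    """Return the set of character k-grams after simple normalization."""
    text = " ".join(text.lower().split())
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}


def minhash_signature(doc: str) -> list[int]:
    """For each salted hash function, keep the minimum hash value
    observed over the document's shingles."""
    grams = shingles(doc)
    sig = []
    for seed in range(NUM_HASHES):
        salt = seed.to_bytes(4, "big")
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(salt + g.encode(), digest_size=8).digest(),
                "big",
            )
            for g in grams
        ))
    return sig


def estimated_jaccard(sig_a: list[int], sig_b: list[int]) -> float:
    """The fraction of matching signature slots estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / NUM_HASHES


if __name__ == "__main__":
    doc1 = "O tribunal julgou procedente o pedido do autor."
    doc2 = "O tribunal julgou procedente o pedido do autor!"  # near-duplicate
    doc3 = "Texto completamente diferente sobre outro assunto."
    s1, s2, s3 = map(minhash_signature, (doc1, doc2, doc3))
    print(estimated_jaccard(s1, s2) >= DUP_THRESHOLD)  # expected: True
    print(estimated_jaccard(s1, s3) >= DUP_THRESHOLD)  # expected: False
```

In practice, corpus-scale deduplication pairs signatures like these with locality-sensitive hashing so that candidate duplicate pairs can be found without comparing every pair of documents; the sketch above only shows the pairwise similarity estimate.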

Published
2024-12-31
How to Cite
Garcia, E., da Silva, N. F. F., Gomes, J., Albuquerque, H., Souza, E., Siqueira, F., Lima, E., & Carvalho, A. (2024). RoBERTaLexPT: A Legal RoBERTa Model pretrained with deduplication for Portuguese. Linguamática, 16(2), 183-200. https://doi.org/10.21814/lm.16.2.457
Section
PROPOR 2024 | Invited Articles