This is the SinBERT-large model. SinBERT models are pretrained on a large Sinhala monolingual corpus (sin-cc-15M) using the RoBERTa pretraining procedure. If you use this model, please cite BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification, LREC 2022.
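A minimal usage sketch, assuming the standard Hugging Face transformers API; the model ID is taken from this card, while the masked-LM loading pattern is the usual one for RoBERTa-style checkpoints and is not prescribed by the card itself.

```python
# Load SinBERT-large with the transformers library (a sketch, not the
# authors' prescribed usage). Since SinBERT is a RoBERTa-style model,
# AutoModelForMaskedLM exposes its pretraining head.
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "NLPC-UOM/SinBERT-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Tokenize a Sinhala sentence and run a forward pass; for downstream
# text classification (as in the cited paper), swap in
# AutoModelForSequenceClassification and fine-tune.
inputs = tokenizer("ශ්‍රී ලංකාව", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, sequence_length, vocab_size)
```

Fine-tuning for classification follows the standard transformers workflow; only the head on top of the pretrained encoder changes.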
