Tokenizer
A tokenizer with a vocabulary size of 10k, built for Intro to Deep Learning Homework 4 on Language Modelling and Automatic Speech Recognition. The tokenizer was trained on LibriSpeech LM text.
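The card does not state which subword algorithm was used, so as an illustration only, here is a minimal pure-Python sketch of byte-pair encoding (BPE) training, the kind of procedure that produces a fixed-size vocabulary (here 10k) from a text corpus; the toy corpus and the `num_merges` parameter are placeholders, not the actual training setup.

```python
from collections import Counter

def train_bpe(corpus, num_merges):
    """Toy BPE trainer: learn `num_merges` symbol merges from an
    iterable of text lines. Illustrative only; the real tokenizer
    was trained on LibriSpeech LM text with a 10k vocabulary."""
    # Represent each word as a tuple of characters plus an end-of-word marker.
    words = Counter()
    for line in corpus:
        for w in line.split():
            words[tuple(w) + ("</w>",)] += 1

    merges = []
    for _ in range(num_merges):
        # Count all adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for word, freq in words.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the best merge to every word.
        new_words = Counter()
        for word, freq in words.items():
            merged, i = [], 0
            while i < len(word):
                if i < len(word) - 1 and (word[i], word[i + 1]) == best:
                    merged.append(word[i] + word[i + 1])
                    i += 2
                else:
                    merged.append(word[i])
                    i += 1
            new_words[tuple(merged)] += freq
        words = new_words
    return merges

# Placeholder corpus; the real training data was LibriSpeech LM text.
merges = train_bpe(["low low low lower lowest"], num_merges=3)
```

A production tokenizer would instead use a library such as Hugging Face `tokenizers` with `vocab_size=10000`, but the merge loop above is the core idea.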