---
license: apache-2.0
---

# MarinbadGPT

MarinbadGPT is a language model based on HuggingFaceTB's SmolLM-135M architecture, fine-tuned on a corpus of Marinbad games. The goal of the model is to generate games of Marinbad and play them against a human player.
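
The sketch below shows one way to load the model with the 🤗 Transformers library and ask it to continue a game. The repository id (`Snowad/MarinbadGPT`) and the plain-text move notation in the prompt are assumptions for illustration, not something documented on this card.

```python
# Minimal inference sketch. The repository id and the move notation in the
# prompt are assumptions; adjust them to the actual published model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Snowad/MarinbadGPT"  # hypothetical repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Feed the moves played so far and let the model propose the continuation.
prompt = "1-3 2-1 "  # hypothetical plain-text move notation
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=16,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```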

## Model Training

MarinbadGPT was trained on high-performance computing infrastructure built around NVIDIA H100 GPUs.

**Training Configuration** (mirrored in the sketch after the list):

- Infrastructure: 2x NVIDIA H100 (80 GB HBM3)
- Duration: 1 hour
- Optimizer: AdamW
- Learning Rate: 3e-4
- Batch Size: micro batch size of 128, with gradient accumulation steps of 8, resulting in an effective batch size of 1024
- Warmup Steps: 100
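
For reference, here is a hedged sketch of these hyperparameters expressed as 🤗 Transformers `TrainingArguments`. Only the values listed above come from the actual run; the output directory and precision flag are assumptions.

```python
# Sketch of the listed hyperparameters as TrainingArguments.
# output_dir and bf16 are assumptions; the rest mirrors the list above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="marinbadgpt-checkpoints",  # hypothetical path
    per_device_train_batch_size=128,       # micro batch size
    gradient_accumulation_steps=8,         # 128 * 8 = 1024 effective batch size
    learning_rate=3e-4,
    optim="adamw_torch",                   # AdamW
    warmup_steps=100,
    bf16=True,                             # assumption: bf16 mixed precision on H100
)
```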