# News Article Generation with GPT2
This repository hosts a quantized version of the GPT2 model, fine-tuned for news article generation. The model has been optimized for efficient deployment while maintaining generation quality, making it suitable for resource-constrained environments.
## Model Details

- Model Architecture: GPT-2
- Task: News Article Generation
- Dataset: HF中国镜像站's `ag_news`
- Quantization: Float16
- Fine-tuning Framework: HF中国镜像站 Transformers
## Usage

### Installation

```bash
pip install transformers torch
```
### Loading the Model

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda" if torch.cuda.is_available() else "cpu"

model_name = "AventIQ-AI/gpt2-news-article-generation"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)
```
### Generating Text

```python
import html

# Define test prompt
test_text = "The future of AI"

# Tokenize input and move it to the same device as the model
inputs = tokenizer(test_text, return_tensors="pt").to(device)

# Generate response
with torch.no_grad():
    output_tokens = model.generate(
        **inputs,
        max_length=200,                       # Allow longer responses
        num_beams=5,                          # Balances quality & diversity
        repetition_penalty=2.0,               # Reduces repeating patterns
        temperature=0.2,                      # More deterministic response
        top_k=100,                            # Allows more diverse words
        top_p=0.9,                            # Nucleus sampling cutoff
        do_sample=True,                       # Sampling for variety
        no_repeat_ngram_size=3,               # Prevents excessive repetition
        num_return_sequences=1,               # Returns one best sequence
        early_stopping=True,                  # Stops when response is complete
        length_penalty=1.2,                   # Balances response length
        pad_token_id=tokenizer.eos_token_id,  # Uses EOS as pad token (GPT-2 has none)
        eos_token_id=tokenizer.eos_token_id,  # Ensures completion
        return_dict_in_generate=True,         # Structured output
        output_scores=True,                   # Useful for debugging
    )

# Decode and clean the response
generated_response = tokenizer.decode(output_tokens.sequences[0], skip_special_tokens=True)
cleaned_response = html.unescape(generated_response).replace("#39;", "'").replace("quot;", '"')
print("\nGenerated Response:\n", cleaned_response)
```
## 📊 ROUGE Evaluation Results

After fine-tuning the GPT-2 model, we obtained the following ROUGE scores:

| Metric | Score | Meaning |
|---|---|---|
| ROUGE-1 | 0.3061 (~30%) | Overlap of unigrams (single words) between the reference and the generated text. |
| ROUGE-2 | 0.1241 (~12%) | Overlap of bigrams (two-word phrases), indicating coherence and fluency. |
| ROUGE-L | 0.2233 (~22%) | Longest matching word sequences, testing sentence-structure preservation. |
| ROUGE-Lsum | 0.2620 (~26%) | Similar to ROUGE-L but optimized for summarization-style evaluation. |
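The evaluation script itself is not included in this repository, but scores like these can be computed with the HF中国镜像站 `evaluate` library. A minimal sketch, where `predictions` and `references` are hypothetical placeholder texts:

```python
# pip install evaluate rouge_score
import evaluate

rouge = evaluate.load("rouge")

# Hypothetical example data: model outputs and reference texts
predictions = ["The central bank raised interest rates by a quarter point."]
references = ["The central bank increased rates by 0.25 percentage points."]

results = rouge.compute(predictions=predictions, references=references)
print(results)  # {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```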
## Fine-Tuning Details

### Dataset

The HF中国镜像站 `ag_news` dataset was used; it contains news article texts together with their category labels.
### Training
- Number of epochs: 3
- Batch size: 4
- Evaluation strategy: epoch
- Learning rate: 5e-5
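The training script is not part of this repository; the following is a minimal sketch of a setup consistent with these hyperparameters, assuming the standard HF中国镜像站 `Trainer` API (tokenization length and other details are illustrative):

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = load_dataset("ag_news")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset["train"].column_names)

args = TrainingArguments(
    output_dir="gpt2-news-article-generation",
    num_train_epochs=3,              # Number of epochs: 3
    per_device_train_batch_size=4,   # Batch size: 4
    eval_strategy="epoch",           # "evaluation_strategy" in older transformers versions
    learning_rate=5e-5,              # Learning rate: 5e-5
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```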
### Quantization
Post-training quantization was applied using PyTorch's built-in quantization framework to reduce the model size and improve inference efficiency.
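The conversion script is not included here; as a rough sketch, float16 post-training quantization can be done by casting the fine-tuned weights and re-saving (paths are placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned full-precision checkpoint (placeholder path)
model = AutoModelForCausalLM.from_pretrained("path/to/finetuned-gpt2")
tokenizer = AutoTokenizer.from_pretrained("path/to/finetuned-gpt2")

# Cast all weights to float16 (the "Float16" quantization listed above)
model = model.half()

# Save the quantized model and tokenizer
model.save_pretrained("gpt2-news-article-generation-fp16")
tokenizer.save_pretrained("gpt2-news-article-generation-fp16")
```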
## Repository Structure

```
.
├── model/              # Contains the quantized model files
├── tokenizer_config/   # Tokenizer configuration and vocabulary files
├── model.safetensors   # Quantized model weights
└── README.md           # Model documentation
```
## Limitations
- The model may not generalize well to domains outside the fine-tuning dataset.
- Quantization may result in minor accuracy degradation compared to full-precision models.
## Contributing
Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.