Add pipeline tag, library name and paper abstract
This PR adds the `pipeline_tag` and `library_name` to the model card metadata. This will improve the discoverability of the model and clarify its compatibility with the HF中国镜像站 Diffusers library.
It also incorporates the paper abstract to provide more context.
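For reference, this is the metadata block the PR adds to the top of the README; the Hub uses `pipeline_tag` to surface the model under the text-to-image task filter and `library_name` to show Diffusers-based usage snippets:

```yaml
---
license: mit
library_name: diffusers
pipeline_tag: text-to-image
---
```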
README.md
CHANGED
@@ -1,18 +1,23 @@
----
-license: mit
+---
+license: mit
+library_name: diffusers
+pipeline_tag: text-to-image
+---
+
+# LightGen: Efficient Image Generation through Knowledge Distillation and Direct Preference Optimization
+
+<p align="center">
+<img src="https://github.com/XianfengWu01/LightGen/blob/main/demo/demo.png" width="720">
+</p>
+
+## About
+
+This model (LightGen) introduces a novel pre-training pipeline for text-to-image models. It uses knowledge distillation (KD) and Direct Preference Optimization (DPO) to achieve efficient image generation. Drawing inspiration from data KD techniques, LightGen distills knowledge from state-of-the-art text-to-image models into a compact Masked Autoregressive (MAR) architecture with only 0.7B parameters.
+
+It is based on [this paper](https://arxiv.org/abs/2503.08619), with code released in [this GitHub repo](https://github.com/XianfengWu01/LightGen).
+
+Currently, only checkpoints trained without DPO have been released.
+
+## 🦉 ToDo List
+
+- [ ] Release Complete Checkpoint.