Tags: Visual Question Answering · Transformers · Safetensors · llava · image-text-to-text · AIGC · LLaVA · Inference Endpoints
ponytail committed · Commit a112ff5 · verified · 1 Parent(s): a70b13c

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -37,7 +37,7 @@ human-llava has a good performance in both general and special fields
 ## News and Update 🔥🔥🔥
 * Oct.23, 2024. **🤗[HumanCaption-HQ-311K](https://huggingface.co/datasets/OpenFace-CQUPT/HumanCaption-HQ-311K), is released!👏👏👏**
 * Sep.12, 2024. **🤗[HumanCaption-10M](https://huggingface.co/datasets/OpenFace-CQUPT/HumanCaption-10M), is released!👏👏👏**
-* Sep.8, 2024. **🤗[HumanLLaVA-llama-3-8B](https://huggingface.co/OpenFace-CQUPT/Human_LLaVA), is released!👏👏👏**
+* Sep.8, 2024. **🤗[HumanVLM](https://huggingface.co/OpenFace-CQUPT/Human_LLaVA), is released!👏👏👏**
 
 
 