swayambhooj committed
Commit 2612669 · verified · 1 Parent(s): 5b34d4a

Upload folder using huggingface_hub

Files changed (29)
  1. .gitattributes +14 -0
  2. eval_datasets/alpaca_70b.jsonl +0 -0
  3. eval_datasets/gsm8k_70b.jsonl +0 -0
  4. expert_responses/Llama_3.1_70B_Instruct/responses_full.jsonl +3 -0
  5. expert_responses/Llama_3.1_70B_Instruct/responses_full_job_report.yaml +10 -0
  6. expert_responses/Llama_3.1_8B_Instruct/responses_full.jsonl +3 -0
  7. expert_responses/Llama_3.1_8B_Instruct/responses_full_job_report.yaml +10 -0
  8. expert_responses/Llama_3.2_1B_Instruct/responses_full.jsonl +3 -0
  9. expert_responses/Llama_3.2_1B_Instruct/responses_full_job_report.yaml +10 -0
  10. expert_responses/Llama_3.2_3B_Instruct/responses_full.jsonl +3 -0
  11. expert_responses/Llama_3.2_3B_Instruct/responses_full_job_report.yaml +10 -0
  12. expert_responses/Qwen2.5_72B_Instruct_/responses_full.jsonl +3 -0
  13. expert_responses/Qwen2.5_72B_Instruct_/responses_full_job_report.yaml +10 -0
  14. expert_responses/Qwen2.5_Coder_32B_Instruct/responses_full.jsonl +3 -0
  15. expert_responses/Qwen2.5_Coder_32B_Instruct/responses_full_job_report.yaml +10 -0
  16. expert_responses/Qwen2.5_Math_1.5B_Instruct/responses_full.jsonl +3 -0
  17. expert_responses/Qwen2.5_Math_1.5B_Instruct/responses_full_job_report.yaml +10 -0
  18. expert_responses/Qwen2.5_Math_72B_Instruct_/responses_full.jsonl +3 -0
  19. expert_responses/Qwen2.5_Math_72B_Instruct_/responses_full_job_report.yaml +10 -0
  20. lmsys-chatbot-arena/expert_responses/Llama_3.1_70B_Instruct/responses_full.jsonl +3 -0
  21. lmsys-chatbot-arena/expert_responses/Llama_3.1_70B_Instruct/responses_full_job_report.yaml +11 -0
  22. lmsys-chatbot-arena/lmsys-chatbot-arena.zip +3 -0
  23. lmsys-chatbot-arena/sample_submission.csv +4 -0
  24. lmsys-chatbot-arena/test.csv +4 -0
  25. lmsys-chatbot-arena/train.csv +3 -0
  26. lmsys-chatbot-arena/train_data_with_categories.jsonl +3 -0
  27. test.jsonl +3 -0
  28. train.jsonl +3 -0
  29. val.jsonl +3 -0
.gitattributes CHANGED
@@ -33,3 +33,17 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ expert_responses/Llama_3.1_70B_Instruct/responses_full.jsonl filter=lfs diff=lfs merge=lfs -text
+ expert_responses/Llama_3.1_8B_Instruct/responses_full.jsonl filter=lfs diff=lfs merge=lfs -text
+ expert_responses/Llama_3.2_1B_Instruct/responses_full.jsonl filter=lfs diff=lfs merge=lfs -text
+ expert_responses/Llama_3.2_3B_Instruct/responses_full.jsonl filter=lfs diff=lfs merge=lfs -text
+ expert_responses/Qwen2.5_72B_Instruct_/responses_full.jsonl filter=lfs diff=lfs merge=lfs -text
+ expert_responses/Qwen2.5_Coder_32B_Instruct/responses_full.jsonl filter=lfs diff=lfs merge=lfs -text
+ expert_responses/Qwen2.5_Math_1.5B_Instruct/responses_full.jsonl filter=lfs diff=lfs merge=lfs -text
+ expert_responses/Qwen2.5_Math_72B_Instruct_/responses_full.jsonl filter=lfs diff=lfs merge=lfs -text
+ lmsys-chatbot-arena/expert_responses/Llama_3.1_70B_Instruct/responses_full.jsonl filter=lfs diff=lfs merge=lfs -text
+ lmsys-chatbot-arena/train.csv filter=lfs diff=lfs merge=lfs -text
+ lmsys-chatbot-arena/train_data_with_categories.jsonl filter=lfs diff=lfs merge=lfs -text
+ test.jsonl filter=lfs diff=lfs merge=lfs -text
+ train.jsonl filter=lfs diff=lfs merge=lfs -text
+ val.jsonl filter=lfs diff=lfs merge=lfs -text
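These fourteen new rules route the large data files through Git LFS. Entries in this exact `<pattern> filter=lfs diff=lfs merge=lfs -text` form are what `git lfs track` appends; a minimal sketch of generating them (not part of this commit, patterns illustrative):

```python
import subprocess

# A few of the patterns added above; `git lfs track <pattern>` appends a
# "<pattern> filter=lfs diff=lfs merge=lfs -text" line to .gitattributes.
patterns = [
    "expert_responses/Llama_3.1_70B_Instruct/responses_full.jsonl",
    "test.jsonl",
    "train.jsonl",
    "val.jsonl",
]

for pattern in patterns:
    subprocess.run(["git", "lfs", "track", pattern], check=True)

# Stage the updated attributes file so the rules travel with the commit.
subprocess.run(["git", "add", ".gitattributes"], check=True)
```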
eval_datasets/alpaca_70b.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
eval_datasets/gsm8k_70b.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
expert_responses/Llama_3.1_70B_Instruct/responses_full.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9f3f7925a54faf4d9fd9367a60611eee05eb4fe909125eed74dddc3852b42097
+ size 203131880
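On disk (before `git lfs pull`) each of these JSONL files is just the three-line pointer shown above. A minimal sketch, assuming the standard LFS pointer layout, for reading those fields:

```python
def read_lfs_pointer(path):
    """Parse the "key value" lines of a Git LFS pointer file."""
    fields = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    return fields

ptr = read_lfs_pointer(
    "expert_responses/Llama_3.1_70B_Instruct/responses_full.jsonl")
print(ptr["oid"])        # sha256:9f3f7925...
print(int(ptr["size"]))  # 203131880 bytes
```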
expert_responses/Llama_3.1_70B_Instruct/responses_full_job_report.yaml ADDED
@@ -0,0 +1,10 @@
+ ckpt_dir: /mnt/CoE_v2/cache/meta-llama/Meta-Llama-3.1-70B-Instruct
+ dataset: /mnt/CoE_v2/artifacts/router_training_data_09_07_2024_with_splits.jsonl
+ expert_name: Llama-3.1-70B-Instruct
+ file_name: responses_full
+ max_model_len: 8192
+ max_tokens: 1000
+ n_completed: 58303
+ n_samples: 58303
+ output_folder: /mnt/CoE_Speculative/datasets/expert_responses
+ tp: 8
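Each expert directory pairs its responses file with a small job report like this one. A hedged sketch of consuming it with PyYAML; the completeness check is an assumption about how n_completed and n_samples are meant to be compared:

```python
import yaml  # PyYAML

with open("expert_responses/Llama_3.1_70B_Instruct/"
          "responses_full_job_report.yaml") as f:
    report = yaml.safe_load(f)

# Assumed sanity check: the generation run covered every prompt.
assert report["n_completed"] == report["n_samples"], "generation incomplete"
print(f"{report['expert_name']}: {report['n_samples']} prompts, "
      f"tp={report['tp']}, max_model_len={report['max_model_len']}")
```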
expert_responses/Llama_3.1_8B_Instruct/responses_full.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1942772569ecb18c0c99414d000522f2d3212c90dd117f2257b6f8959807cbce
+ size 210877995
expert_responses/Llama_3.1_8B_Instruct/responses_full_job_report.yaml ADDED
@@ -0,0 +1,10 @@
+ ckpt_dir: /mnt/CoE_v2/cache/meta-llama/Llama-3.1-8B-Instruct/
+ dataset: /mnt/CoE_v2/artifacts/router_training_data_09_07_2024_with_splits.jsonl
+ expert_name: Llama-3.1-8B-Instruct
+ file_name: responses_full
+ max_model_len: 8192
+ max_tokens: 1000
+ n_completed: 58303
+ n_samples: 58303
+ output_folder: /mnt/CoE_Speculative/datasets/expert_responses
+ tp: 4
expert_responses/Llama_3.2_1B_Instruct/responses_full.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e4c40b1104da3bab4c88d896650e2d6f407ed796ce6b96e8a9f87896a302f2fa
+ size 213753286
expert_responses/Llama_3.2_1B_Instruct/responses_full_job_report.yaml ADDED
@@ -0,0 +1,10 @@
+ ckpt_dir: /mnt/CoE_v2/cache/meta-llama/Llama-3.2-1B-Instruct
+ dataset: /mnt/CoE_v2/artifacts/router_training_data_09_07_2024_with_splits.jsonl
+ expert_name: Llama-3.2-1B-Instruct
+ file_name: responses_full
+ max_model_len: 8192
+ max_tokens: 1000
+ n_completed: 58303
+ n_samples: 58303
+ output_folder: /mnt/CoE_Speculative/datasets/expert_responses
+ tp: 1
expert_responses/Llama_3.2_3B_Instruct/responses_full.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a9f4c3a7f993a1e48650652d8565fac9aab6b35fab60d323350610cfbee331d2
+ size 215817307
expert_responses/Llama_3.2_3B_Instruct/responses_full_job_report.yaml ADDED
@@ -0,0 +1,10 @@
+ ckpt_dir: /mnt/CoE_v2/cache/meta-llama/Llama-3.2-3B-Instruct/
+ dataset: /mnt/CoE_v2/artifacts/router_training_data_09_07_2024_with_splits.jsonl
+ expert_name: Llama-3.2-3B-Instruct
+ file_name: responses_full
+ max_model_len: 8192
+ max_tokens: 1000
+ n_completed: 58303
+ n_samples: 58303
+ output_folder: /mnt/CoE_Speculative/datasets/expert_responses
+ tp: 4
expert_responses/Qwen2.5_72B_Instruct_/responses_full.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f9f5aa9054c44a46b363d0d2221ef69ff4f7b3642bf6ff7e4ae9a69cce52bee3
+ size 230334989
expert_responses/Qwen2.5_72B_Instruct_/responses_full_job_report.yaml ADDED
@@ -0,0 +1,10 @@
+ ckpt_dir: /mnt/CoE_v2/cache/Qwen/Qwen2.5-72B-Instruct/
+ dataset: /mnt/CoE_v2/artifacts/router_training_data_09_07_2024_with_splits.jsonl
+ expert_name: Qwen2.5-72B-Instruct/
+ file_name: responses_full
+ max_model_len: 8192
+ max_tokens: 1000
+ n_completed: 58303
+ n_samples: 58303
+ output_folder: /mnt/CoE_Speculative/datasets/expert_responses
+ tp: 8
expert_responses/Qwen2.5_Coder_32B_Instruct/responses_full.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:12e82b4dd6220023c0f4af39cc8847a18494be73b6fa5d1a63121377851e360e
+ size 211618138
expert_responses/Qwen2.5_Coder_32B_Instruct/responses_full_job_report.yaml ADDED
@@ -0,0 +1,10 @@
+ ckpt_dir: /mnt/CoE_v2/cache/Qwen/Qwen2.5-Coder-32B-Instruct/
+ dataset: /mnt/CoE_v2/artifacts/router_training_data_09_07_2024_with_splits.jsonl
+ expert_name: Qwen2.5-Coder-32B-Instruct
+ file_name: responses_full
+ max_model_len: 8192
+ max_tokens: 1000
+ n_completed: 58303
+ n_samples: 58303
+ output_folder: /mnt/CoE_Speculative/datasets/expert_responses
+ tp: 4
expert_responses/Qwen2.5_Math_1.5B_Instruct/responses_full.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e4a66d4624029f986fde0f7fb94ac89076645c623ab85947d3ebd36139767d30
+ size 203716510
expert_responses/Qwen2.5_Math_1.5B_Instruct/responses_full_job_report.yaml ADDED
@@ -0,0 +1,10 @@
+ ckpt_dir: /mnt/CoE_v2/cache/Qwen/Qwen2.5-Math-1.5B-Instruct
+ dataset: /mnt/CoE_v2/artifacts/router_training_data_09_07_2024_with_splits.jsonl
+ expert_name: Qwen2.5-Math-1.5B-Instruct
+ file_name: responses_full
+ max_model_len: 4096
+ max_tokens: 1000
+ n_completed: 58303
+ n_samples: 58303
+ output_folder: /mnt/CoE_Speculative/datasets/expert_responses
+ tp: 1
expert_responses/Qwen2.5_Math_72B_Instruct_/responses_full.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:06d21152329794edb9aea567ff7a93c73741974a98b8b340eb669b108db86bd1
+ size 199051852
expert_responses/Qwen2.5_Math_72B_Instruct_/responses_full_job_report.yaml ADDED
@@ -0,0 +1,10 @@
+ ckpt_dir: /mnt/CoE_v2/cache/Qwen/Qwen2.5-Math-72B-Instruct/
+ dataset: /mnt/CoE_v2/artifacts/router_training_data_09_07_2024_with_splits.jsonl
+ expert_name: Qwen2.5-Math-72B-Instruct/
+ file_name: responses_full
+ max_model_len: 4096
+ max_tokens: 1000
+ n_completed: 58303
+ n_samples: 58303
+ output_folder: /mnt/CoE_Speculative/datasets/expert_responses
+ tp: 8
lmsys-chatbot-arena/expert_responses/Llama_3.1_70B_Instruct/responses_full.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:88d58ed5a8a39ed0f78dc3eba7a6c97aaab234dfccde1745f15433a86afc0988
+ size 132499279
lmsys-chatbot-arena/expert_responses/Llama_3.1_70B_Instruct/responses_full_job_report.yaml ADDED
@@ -0,0 +1,11 @@
+ ckpt_dir: /mnt/CoE_v2/cache/meta-llama/Meta-Llama-3.1-70B-Instruct
+ dataset: /mnt/CoE_Speculative/datasets/lmsys-chatbot-arena/train_data_with_categories.jsonl
+ dp: null
+ expert_name: Llama-3.1-70B-Instruct
+ file_name: responses_full
+ max_model_len: 8192
+ max_tokens: 2000
+ n_completed: 57477
+ n_samples: 57477
+ output_folder: /mnt/CoE_Speculative/datasets/lmsys-chatbot-arena/expert_responses/
+ tp: 8
lmsys-chatbot-arena/lmsys-chatbot-arena.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c82ab0123d954679af8bf448af87dd30f7dbda633e4064bd2bfbf5ad95530c21
+ size 59810797
lmsys-chatbot-arena/sample_submission.csv ADDED
@@ -0,0 +1,4 @@
+ id,winner_model_a,winner_model_b,winner_tie
+ 136060,0.3333333333333333,0.3333333333333333,0.3333333333333333
+ 211333,0.3333333333333333,0.3333333333333333,0.3333333333333333
+ 1233961,0.3333333333333333,0.3333333333333333,0.3333333333333333
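The sample submission assigns a uniform 1/3 probability to each of the three outcomes. A short sketch reproducing that baseline from the ids in test.csv (pandas assumed; not part of this commit):

```python
import pandas as pd

test = pd.read_csv("lmsys-chatbot-arena/test.csv")
submission = pd.DataFrame({
    "id": test["id"],
    "winner_model_a": 1 / 3,  # uniform probability over the three outcomes
    "winner_model_b": 1 / 3,
    "winner_tie": 1 / 3,
})
submission.to_csv("submission.csv", index=False)
```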
lmsys-chatbot-arena/test.csv ADDED
@@ -0,0 +1,4 @@
+ id,prompt,response_a,response_b
+ 136060,"[""I have three oranges today, I ate an orange yesterday. How many oranges do I have?""]","[""You have two oranges today.""]","[""You still have three oranges. Eating an orange yesterday does not affect the number of oranges you have today.""]"
+ 211333,"[""You are a mediator in a heated political debate between two opposing parties. Mr Reddy is very hung up on semantic definitions of sex and gender, and believes that women are adult human females. Meanwhile Ms Blue is extremely fluid with definitions and does not care about truth. He (Ms blue uses he\/him pronouns) insists that anybody can be any gender, gametes don't mean anything, and that men can get pregnant. You, Mr Goddy are tasked with helping them both find a middle ground.""]","[""Thank you for sharing the details of the situation. As a mediator, I understand the importance of finding a middle ground that both parties can agree upon.\n\nIt is important to recognize that language and definitions can be complex and can vary depending on cultural and societal norms. At the same time, it is also important to acknowledge the biological differences between males and females.\n\nOne possible approach to finding a middle ground could be to focus on respecting individuals' self-identification and autonomy while also acknowledging the biological differences between males and females. This could involve using inclusive language that reflects individuals' gender identities while also recognizing the importance of biological sex in certain contexts, such as medical treatment and reproductive health.\n\nIt may also be helpful to explore ways to create a more inclusive society where individuals of all gender identities are valued and respected. This could involve promoting education and awareness about gender diversity and creating policies and practices that support individuals' gender identities and expression.\n\nUltimately, the goal of mediation is to find a solution that meets the needs and concerns of both parties. By focusing on mutual respect, open communication, and a willingness to find common ground, it is possible to reach a resolution that allows both parties to move forward in a positive and productive way.""]","[""Mr Reddy and Ms Blue both have valid points in their arguments. On one hand, Mr Reddy is correct that women are adult human females, as defined by their biological and physical characteristics. However, Ms Blue is right that gender is a more fluid concept and that people can identify with any gender they choose. A compromise could be that gender is defined by an individual's self-identification, while sex is defined by their biological characteristics.""]"
+ 1233961,"[""How to initialize the classification head when I do transfer learning. For example, I have a pre-trained vision transformer on ImageNet, and now I want to finetune it on StanfordCars"",""I want to do full finetuning""]","[""When you want to initialize the classification head for transfer learning, you can follow these steps:\n\n 1. Load the pre-trained model: First, load the pre-trained vision transformer model on ImageNet. You can use popular deep learning libraries like PyTorch or TensorFlow to load the model.\n 2. Freeze the pre-trained layers: Next, you need to freeze the pre-trained layers of the model so that they don't get updated during the training process. This is because the pre-trained layers have learned useful features that can be reused for the new task.\n 3. Add a new classification head: Now, you need to add a new classification head to the model. The classification head is responsible for outputting the predicted class for a given input image. The size of the classification head depends on the number of classes in the new task. For example, if you are working on the StanfordCars dataset, which has 196 classes, you would need a classification head with 196 output neurons.\n 4. Initialize the classification head: Initialize the weights of the classification head randomly or using pre-trained weights. This is important because the initial weights can have a significant impact on the performance of the model.\n 5. Train the model: Finally, you can train the model on the new task using your labeled data. During training, only the weights of the classification head will be updated, while the weights of the frozen pre-trained layers will remain unchanged. This allows the model to learn task-specific features while leveraging the pre-trained features.\n\nNote that there are different ways to initialize the classification head, such as using Xavier initialization, Kaiming initialization, or He initialization. You can choose the initialization method that works best for your specific task and model architecture."",""If you want to do full finetuning of the pre-trained vision transformer on the StanfordCars dataset, you can follow these steps:\n\n 1. Load the pre-trained model: Load the pre-trained vision transformer model on ImageNet. You can use popular deep learning libraries like PyTorch or TensorFlow to load the model.\n 2. Remove the last classification layer: The pre-trained model typically has a last classification layer that is specific to the ImageNet dataset. To adapt the model to the StanfordCars dataset, you need to remove this layer.\n 3. Add a new classification head: Add a new classification head to the model. The classification head is responsible for outputting the predicted class for a given input image. The size of the classification head depends on the number of classes in the StanfordCars dataset, which has 196 classes. You can add a fully connected layer with 196 output neurons as the new classification head.\n 4. Initialize the classification head: Initialize the weights of the classification head randomly or using pre-trained weights. This is important because the initial weights can have a significant impact on the performance of the model.\n 5. Freeze the pre-trained layers: Freeze the pre-trained layers of the model so that they don't get updated during the training process. This is because the pre-trained layers have learned useful features that can be reused for the new task.\n 6. 
Train the model: Finally, you can train the model on the StanfordCars dataset using your labeled data. During training, all the weights of the model will be updated, including the weights of the frozen pre-trained layers. This allows the model to learn task-specific features while leveraging the pre-trained features.\n\nNote that when you do full finetuning, you need to use a lower learning rate and a smaller number of training epochs compared to transfer learning, because the model has already learned a lot of useful features from the pre-trained weights. You can experiment with different learning rates and training schedules to find the best hyperparameters for your specific task and model architecture.""]","[""To initialize the classification head when performing transfer learning, follow these steps:\n\n1. Load the pre-trained vision transformer model that was trained on ImageNet.\n2. Replace the existing classification head with a new one suitable for the StanfordCars dataset.\n3. Initialize the weights of the new classification head with random values or pre-trained weights if available.\n4. Freeze the weights of the vision transformer backbone to prevent them from being updated during training.\n5. Train the model on the StanfordCars dataset, fine-tuning the classification head while keeping the backbone frozen.\n\nHere's an example implementation in Python using PyTorch:\n\n```python\nimport torch\nimport torch.nn as nn\nimport torchvision.models as models\n\n# Step 1: Load pre-trained vision transformer model\nvision_transformer = models.vit_base_patch16_224(pretrained=True)\n\n# Step 2: Replace classification head\nnum_classes = 196 # Number of classes in StanfordCars dataset\nvision_transformer.head = nn.Linear(vision_transformer.head.in_features, num_classes)\n\n# Step 3: Initialize weights of new classification head\nnn.init.xavier_uniform_(vision_transformer.head.weight)\nnn.init.constant_(vision_transformer.head.bias, 0)\n\n# Step 4: Freeze the weights of the backbone\nfor param in vision_transformer.parameters():\n param.requires_grad = False\n\n# Step 5: Train the model on StanfordCars dataset\n# ...\n```\n\nIn this example, we load the pre-trained vision transformer model using `torchvision.models.vit_base_patch16_224`. We then replace the `head` (classification head) of the model with a new linear layer (`nn.Linear`) that has the appropriate number of output classes for the StanfordCars dataset.\n\nAfter replacing the classification head, we initialize its weights using Xavier initialization (`nn.init.xavier_uniform_`) and set its biases to zero (`nn.init.constant_`).\n\nFinally, we freeze the weights of the vision transformer backbone by setting `requires_grad=False` for all parameters. This ensures that only the weights of the classification head will be updated during training on the StanfordCars dataset."",""If you want to perform full finetuning, where both the classification head and the backbone of the vision transformer are updated during training on the StanfordCars dataset, you can omit freezing the backbone weights. 
Here's an updated version of the code:\n\n```python\nimport torch\nimport torch.nn as nn\nimport torchvision.models as models\n\n# Step 1: Load pre-trained vision transformer model\nvision_transformer = models.vit_base_patch16_224(pretrained=True)\n\n# Step 2: Replace classification head\nnum_classes = 196 # Number of classes in StanfordCars dataset\nvision_transformer.head = nn.Linear(vision_transformer.head.in_features, num_classes)\n\n# Step 3: Initialize weights of new classification head\nnn.init.xavier_uniform_(vision_transformer.head.weight)\nnn.init.constant_(vision_transformer.head.bias, 0)\n\n# Step 4: Train the model on StanfordCars dataset\n# ...\n```\n\nIn this case, you can directly train the vision transformer model on the StanfordCars dataset without freezing any weights. During training, both the weights of the classification head and the backbone will be updated.\n\nKeep in mind that full finetuning may require more computational resources and a larger labeled dataset compared to freezing the backbone. It can help the model learn more specific features related to the StanfordCars dataset but may also increase the risk of overfitting if the dataset is small. Monitoring the training progress and adjusting hyperparameters, such as learning rate and regularization, can help mitigate overfitting.""]"
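Note that the prompt, response_a, and response_b columns above hold JSON-encoded lists of strings, one entry per conversation turn, so they need a decoding pass after loading. A minimal sketch, assuming the fields are well-formed JSON:

```python
import json
import pandas as pd

test = pd.read_csv("lmsys-chatbot-arena/test.csv")
for col in ("prompt", "response_a", "response_b"):
    # Each cell is a JSON array of strings, one per conversation turn.
    test[col] = test[col].apply(json.loads)

first = test.iloc[0]
print(first["prompt"][0])        # "I have three oranges today, ..."
print(len(first["response_a"]))  # number of turns in model A's reply
```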
lmsys-chatbot-arena/train.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0692154aaf20fc6649090c3f49b6b5dd1e693a765ecb5f0856ae0da5946f5be2
+ size 184175071
lmsys-chatbot-arena/train_data_with_categories.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9edc14c1d3ed2ae635921b72a59b339c8423cfc378ce1d4e32b247f6b5cb99e1
+ size 23911310
test.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2a23e18d781e64f16c05e8aa526cfef165149e76381864d671774e577f40d5d0
+ size 82376490
train.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a563d72c7e535cf41851c319d2b0e0b1b81b2a5647d231ce253f8d150b67bb75
+ size 660931986
val.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a42564af1bdada3032160b6693b0d05eb2c804b5766b5b4e28eae3c747bf32de
+ size 72708646
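After `git lfs pull` materializes these files, the sha256 oid and byte size recorded in each pointer can be re-checked locally. A hedged sketch using val.jsonl's values from the pointer above:

```python
import hashlib
import os

def matches_pointer(path, expected_hex, expected_size):
    """Stream the file and compare its sha256/size with the pointer's values."""
    if os.path.getsize(path) != expected_size:
        return False
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex

print(matches_pointer(
    "val.jsonl",
    "a42564af1bdada3032160b6693b0d05eb2c804b5766b5b4e28eae3c747bf32de",
    72708646))
```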